
Webscraping with asyncio

jmortegac
November 06, 2016


Transcript

  1. Agenda
     ▶ Web scraping Python tools
     ▶ Requests vs aiohttp
     ▶ Introduction to asyncio
     ▶ Async client/server
     ▶ Building a web crawler with asyncio
     ▶ Alternatives to asyncio
  2. Web scraping with Python
     1. Download the web page with an HTTP module (requests, urllib, aiohttp)
     2. Parse the page with BeautifulSoup/lxml
     3. Select elements with regular expressions, XPath or CSS selectors
     4. Store the results in a database, CSV or JSON
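     A minimal sketch of these four steps using requests and BeautifulSoup; the URL, the CSS selector and the output file are placeholders for illustration, not part of the original deck.

     import csv
     import requests
     from bs4 import BeautifulSoup

     response = requests.get("http://python.org")              # 1. download
     soup = BeautifulSoup(response.text, "lxml")               # 2. parse
     links = [a.get("href") for a in soup.select("a[href]")]   # 3. select (CSS selector)

     with open("links.csv", "w", newline="") as f:             # 4. store as CSV
         writer = csv.writer(f)
         writer.writerows([link] for link in links)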
  3. BeautifulSoup functions
     ▪ find_all('a') → returns all links
     ▪ find('title') → returns the first <title> element
     ▪ get('href') → returns the value of the href attribute
     ▪ (element).text → returns the text inside an element

     for link in soup.find_all('a'):
         print(link.get('href'))
  4. Spiders / crawlers
     ▶ A Web crawler is an Internet bot that systematically browses the World Wide Web, typically for the purpose of Web indexing. A Web crawler may also be called a Web spider.
       https://en.wikipedia.org/wiki/Web_crawler
  5. Scrapy
     ▶ Uses a mechanism based on XPath expressions called XPath selectors
     ▶ Uses the lxml parser to find elements
     ▶ Uses Twisted for asynchronous operations
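     A minimal Scrapy spider sketch showing XPath selectors; the spider name, start URL and XPath expressions are placeholders, not part of the original deck.

     import scrapy

     class QuotesSpider(scrapy.Spider):
         name = "example"
         start_urls = ["http://quotes.toscrape.com"]

         def parse(self, response):
             # Select elements with XPath selectors
             for quote in response.xpath("//div[@class='quote']"):
                 yield {
                     "text": quote.xpath("span[@class='text']/text()").extract_first(),
                     "author": quote.xpath("span/small/text()").extract_first(),
                 }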
  6. Scrapy advantages
     ▶ Faster than mechanize because it uses Twisted for asynchronous operations
     ▶ Scrapy has better support for HTML parsing
     ▶ Scrapy has better support for Unicode characters, redirections, gzipped responses and encodings
     ▶ You can export the extracted data directly to JSON, XML and CSV
  7. Export data
     ▶ $ scrapy crawl <spider_name>
     ▶ $ scrapy crawl <spider_name> -o items.json -t json
     ▶ $ scrapy crawl <spider_name> -o items.csv -t csv
     ▶ $ scrapy crawl <spider_name> -o items.xml -t xml
  8. The concurrency problem
     ▶ Different approaches:
     ▶ Multiple processes
     ▶ Threads
     ▶ Separate distributed machines
     ▶ Asynchronous programming (event loop)
  9. Requests problems
     ▶ Requests operations block the main thread
     ▶ The program pauses until the operation completes
     ▶ We need one thread per request if we want non-blocking operations
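     A short sketch of the thread-per-request workaround mentioned above; it assumes the requests library and uses placeholder URLs.

     import threading
     import requests

     def fetch(url):
         # Each blocking requests.get() runs in its own thread,
         # so the main thread is not blocked.
         response = requests.get(url)
         print(url, response.status_code)

     urls = ["http://python.org", "http://pycon.org"]
     threads = [threading.Thread(target=fetch, args=(url,)) for url in urls]
     for t in threads:
         t.start()
     for t in threads:
         t.join()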
  10. New concepts
     ▶ Event loop
     ▶ Async
     ▶ Await
     ▶ Futures
     ▶ Coroutines
     ▶ Tasks
     ▶ Executors
  11. Event loop implementations
     ▶ asyncio: https://docs.python.org/3.4/library/asyncio.html
     ▶ Tornado web server: http://www.tornadoweb.org/en/stable
     ▶ Twisted: https://twistedmatrix.com
     ▶ Gevent: http://www.gevent.org
  12. Asyncio
     ▶ Python >= 3.3
     ▶ Event-loop framework
     ▶ Asynchronous I/O
     ▶ Non-blocking approach with sockets
     ▶ All requests in one thread
     ▶ Event-driven switching
     ▶ aiohttp module for making requests asynchronously
  13. Requests vs aiohttp

     #!/usr/local/bin/python3.5
     # aiohttp (asynchronous)
     import asyncio
     from aiohttp import ClientSession

     async def hello():
         async with ClientSession() as session:
             async with session.get("http://httpbin.org/headers") as response:
                 response = await response.read()
                 print(response)

     loop = asyncio.get_event_loop()
     loop.run_until_complete(hello())

     # requests (blocking)
     import requests

     def hello():
         return requests.get("http://httpbin.org/get")

     print(hello())
  14. Event Loop
     ▶ An event loop allows us to write asynchronous code using callbacks or coroutines.
     ▶ The event loop works like a task switcher, just the way operating systems switch between active tasks on the CPU.
     ▶ The idea is that we have an event loop running until all scheduled tasks are completed.
     ▶ Futures and tasks are created through the event loop.
  15. Event Loop
     ▶ An event loop is used to orchestrate the execution of the coroutines.
     ▶ loop = asyncio.get_event_loop()
     ▶ loop.run_until_complete(coroutine_or_future)
     ▶ loop.run_forever()
     ▶ loop.stop()
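     A minimal sketch of these event-loop calls, assuming Python 3.5+ and a trivial placeholder coroutine.

     import asyncio

     async def say_after(delay, message):
         await asyncio.sleep(delay)
         print(message)

     # Create the loop, run a coroutine to completion, then close the loop.
     loop = asyncio.get_event_loop()
     loop.run_until_complete(say_after(1, "hello from the event loop"))
     loop.close()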
  16. Coroutines
     ▶ Coroutines are functions that allow multitasking without requiring multiple threads or processes.
     ▶ Coroutines are like functions, but they can be suspended and resumed at certain points in the code.
     ▶ Coroutines let us write asynchronous code that combines the efficiency of callbacks with the classic good looks of multithreaded code.
  17. Coroutines 3.4 vs 3.5

     # Python 3.4 syntax
     import asyncio

     @asyncio.coroutine
     def fetch(self, url):
         response = yield from self.session.get(url)
         body = yield from response.read()

     # Python 3.5 syntax
     import asyncio

     async def fetch(self, url):
         response = await self.session.get(url)
         body = await response.read()
  18. Coroutines in event loop

     #!/usr/local/bin/python3.5
     import asyncio
     import aiohttp

     async def get_page(url):
         response = await aiohttp.request('GET', url)
         body = await response.read()
         print(body)

     loop = asyncio.get_event_loop()
     loop.run_until_complete(asyncio.wait([get_page('http://python.org'),
                                           get_page('http://pycon.org')]))
  19. Requests in event loop

     # equivalent methods
     async def getpage_with_requests(url):
         return await loop.run_in_executor(None, requests.get, url)

     async def getpage_with_aiohttp(url):
         async with aiohttp.ClientSession() as session:
             async with session.get(url) as response:
                 return await response.read()
  20. Tasks
     ▶ The asyncio.Task class is a subclass of asyncio.Future that encapsulates and manages a coroutine.
     ▶ Tasks allow independently running coroutines to run concurrently with other tasks on the same event loop.
     ▶ When a coroutine is wrapped in a task, the task is connected to the event loop.
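     A small sketch of wrapping coroutines in tasks so they run concurrently on the same event loop; the coroutine and its arguments are placeholders.

     import asyncio

     async def work(name, delay):
         await asyncio.sleep(delay)
         return name

     loop = asyncio.get_event_loop()
     # Wrap each coroutine in a Task connected to the event loop.
     tasks = [loop.create_task(work("task1", 1)),
              loop.create_task(work("task2", 2))]
     results = loop.run_until_complete(asyncio.gather(*tasks))
     print(results)  # ['task1', 'task2']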
  21. Futures
     ▶ To work with a Future object in asyncio, we declare the following:

         import asyncio
         future = asyncio.Future()

     ▶ https://docs.python.org/3/library/asyncio-task.html#future
     ▶ https://docs.python.org/3/library/concurrent.futures.html
  22. Futures
     ▶ The asyncio.Future class is essentially a promise of a result.
     ▶ A Future returns its result when it is available, and once it receives the result, it passes it along to all the registered callbacks.
     ▶ Each future is a task to be executed in the event loop.
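     A minimal sketch of a Future delivering its result to a registered callback, assuming Python 3.5+; the producer coroutine and the result value are placeholders.

     import asyncio

     def on_done(future):
         # Runs once the future's result has been set.
         print("callback received:", future.result())

     async def produce(future):
         await asyncio.sleep(1)
         future.set_result("page body")  # deliver the promised result

     loop = asyncio.get_event_loop()
     future = asyncio.Future()
     future.add_done_callback(on_done)
     loop.run_until_complete(produce(future))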
  23. Semaphores
     ▶ Adding synchronization
     ▶ Limiting the number of concurrent requests
     ▶ The argument indicates the number of simultaneous requests we want to allow.

     sem = asyncio.Semaphore(5)
     with (await sem):
         page = await get(url, compress=True)
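     A slightly fuller sketch of limiting concurrency with a semaphore; it assumes aiohttp and placeholder URLs, and uses async with sem, the Python 3.5+ form of with (await sem).

     import asyncio
     import aiohttp

     sem = asyncio.Semaphore(5)  # at most 5 requests in flight at once

     async def get_page(session, url):
         async with sem:
             async with session.get(url) as response:
                 return await response.read()

     async def main():
         async with aiohttp.ClientSession() as session:
             pages = await asyncio.gather(*[get_page(session, url)
                                            for url in ['http://python.org',
                                                        'http://pycon.org']])
             print([len(page) for page in pages])

     loop = asyncio.get_event_loop()
     loop.run_until_complete(main())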
  24. Async Web crawler
     ▶ Send asynchronous requests to all the links on a web page and add the responses to a queue to be processed as we go.
     ▶ Coroutines allow running independent tasks and processing their results in three ways:
     ▶ asyncio.as_completed → process the results as they come in.
     ▶ asyncio.gather → process the results only once they have all finished.
     ▶ asyncio.ensure_future → wrap each coroutine in a task and wait for them.
  25. Async Web crawler: asyncio.as_completed

     import asyncio
     import random

     @asyncio.coroutine
     def get_url(url):
         wait_time = random.randint(1, 4)
         yield from asyncio.sleep(wait_time)
         print('Done: URL {} took {}s to get!'.format(url, wait_time))
         return url, wait_time

     @asyncio.coroutine
     def process_results_as_come_in():
         coroutines = [get_url(url) for url in ['URL1', 'URL2', 'URL3']]
         for coroutine in asyncio.as_completed(coroutines):
             url, wait_time = yield from coroutine
             print('Coroutine for {} is done'.format(url))

     def main():
         loop = asyncio.get_event_loop()
         print("Process results as they come in:")
         loop.run_until_complete(process_results_as_come_in())

     if __name__ == '__main__':
         main()
  26. Async Web crawler: asyncio.gather

     import asyncio
     import random

     @asyncio.coroutine
     def get_url(url):
         wait_time = random.randint(1, 4)
         yield from asyncio.sleep(wait_time)
         print('Done: URL {} took {}s to get!'.format(url, wait_time))
         return url, wait_time

     @asyncio.coroutine
     def process_once_everything_ready():
         coroutines = [get_url(url) for url in ['URL1', 'URL2', 'URL3']]
         results = yield from asyncio.gather(*coroutines)
         print(results)

     def main():
         loop = asyncio.get_event_loop()
         print("Process results once they are all ready:")
         loop.run_until_complete(process_once_everything_ready())

     if __name__ == '__main__':
         main()
  27. asyncio.gather
     From the Python documentation, this is what asyncio.gather does:

     asyncio.gather(*coros_or_futures, loop=None, return_exceptions=False)

     Return a future aggregating results from the given coroutine objects or futures. All futures must share the same event loop. If all the tasks are done successfully, the returned future's result is the list of results (in the order of the original sequence, not necessarily the order of results arrival). If return_exceptions is True, exceptions in the tasks are treated the same as successful results, and gathered in the result list; otherwise, the first raised exception will be immediately propagated to the returned future.
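     A small sketch of the return_exceptions behaviour described above, using placeholder coroutines.

     import asyncio

     async def ok(value):
         await asyncio.sleep(0.1)
         return value

     async def boom():
         await asyncio.sleep(0.1)
         raise ValueError("request failed")

     loop = asyncio.get_event_loop()
     # With return_exceptions=True the exception is gathered into the
     # result list instead of being propagated immediately.
     results = loop.run_until_complete(
         asyncio.gather(ok(1), boom(), ok(3), return_exceptions=True))
     print(results)  # [1, ValueError('request failed'), 3]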
  28. Async Web crawler: asyncio.ensure_future

     import asyncio
     import random

     @asyncio.coroutine
     def get_url(url):
         wait_time = random.randint(1, 4)
         yield from asyncio.sleep(wait_time)
         print('Done: URL {} took {}s to get!'.format(url, wait_time))
         return url, wait_time

     @asyncio.coroutine
     def process_ensure_future():
         tasks = [asyncio.ensure_future(get_url(url)) for url in ['URL1', 'URL2', 'URL3']]
         results = yield from asyncio.wait(tasks)
         print(results)

     def main():
         loop = asyncio.get_event_loop()
         print("Process ensure future:")
         loop.run_until_complete(process_ensure_future())

     if __name__ == '__main__':
         main()
  29. Alternatives to asyncio
     ▶ ThreadPoolExecutor
       https://docs.python.org/3.5/library/concurrent.futures.html#concurrent.futures.ThreadPoolExecutor
     ▶ ProcessPoolExecutor
       https://docs.python.org/3.5/library/concurrent.futures.html#concurrent.futures.ProcessPoolExecutor
     ▶ Parallel Python
       http://www.parallelpython.com
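     A small sketch of the ThreadPoolExecutor alternative; it assumes the requests library and uses placeholder URLs.

     from concurrent.futures import ThreadPoolExecutor, as_completed
     import requests

     urls = ['http://python.org', 'http://pycon.org']

     # A pool of worker threads runs the blocking requests concurrently.
     with ThreadPoolExecutor(max_workers=5) as executor:
         futures = {executor.submit(requests.get, url): url for url in urls}
         for future in as_completed(futures):
             print(futures[future], future.result().status_code)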
  30. Parallel Python
     ▶ SMP (symmetric multiprocessing) architecture with multiple cores in the same machine
     ▶ Distributes tasks across multiple machines
     ▶ Clusters
  31. References
     ▶ http://www.crummy.com/software/BeautifulSoup
     ▶ http://scrapy.org
     ▶ http://docs.webscraping.com
     ▶ https://github.com/KeepSafe/aiohttp
     ▶ http://aiohttp.readthedocs.io/en/stable/
     ▶ https://docs.python.org/3.4/library/asyncio.html
     ▶ https://github.com/REMitchell/python-scraping