In my previous blog post, Datasets for Machine Learning, I introduced many datasets for machine learning. However, you might not find a dataset that suits your own research or project: the data you need are published online but not archived. In that case, you need to scrape the data yourself. Of course, if you want to use the scraped data for commercial purposes, you should act carefully and make sure your usage is legal.


Please pay attention to the following:

  1. Read the terms and conditions about data usage on the website you want to scrape.
  2. Once you have permission, be polite and friendly when scraping. DO NOT send requests too frequently, so that you avoid putting unnecessary load on the server.
  3. After scraping the data, use them legally.
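One concrete way to be polite is to respect the site's robots.txt before fetching anything. The sketch below uses Python's standard `urllib.robotparser`; the robots.txt content, paths, and the `my-scraper` agent name are hypothetical stand-ins (in practice you would load the real file from the target site and also pause between requests, e.g. with `time.sleep`).

```python
import urllib.robotparser

# A sample robots.txt; in practice you would fetch the site's own file
# with rp.set_url("https://example.com/robots.txt"); rp.read().
robots_txt = """User-agent: *
Disallow: /private/
Crawl-delay: 2
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

def allowed(url, agent="my-scraper"):
    """Return True if robots.txt permits this agent to fetch the URL."""
    return rp.can_fetch(agent, url)

print(allowed("https://example.com/data/matches.html"))   # allowed
print(allowed("https://example.com/private/admin.html"))  # disallowed
```

If a page is disallowed, skip it; the `Crawl-delay` hint also tells you how long to wait between requests.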

Scraping Tools

Here are some tools and libraries, written in or supporting Python, for web scraping:

  1. BeautifulSoup: a Python package for parsing HTML and XML documents.
  2. Scrapy: an open-source, collaborative, high-level web crawling and scraping framework for extracting data from websites in a fast, simple, yet extensible way.
  3. pyspider: a powerful spider (web crawler) system in Python.
  4. pyquery: a jQuery-like library that allows you to make jQuery-style queries on XML documents.
  5. webscraping: a library for web scraping and website navigation.
  6. Selenium: a suite of tools to automate web browsers across many platforms.
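As a taste of the first tool on the list, here is a minimal BeautifulSoup sketch. The HTML fragment, the `results` table id, and the team names are all made up for illustration; a real script would download the page first.

```python
from bs4 import BeautifulSoup

# A small HTML fragment standing in for a downloaded page.
html = """
<table id="results">
  <tr><th>Team</th><th>Points</th></tr>
  <tr><td>Ajax</td><td>90</td></tr>
  <tr><td>Feyenoord</td><td>82</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
rows = []
for tr in soup.select("#results tr")[1:]:  # skip the header row
    team, points = [td.get_text() for td in tr.find_all("td")]
    rows.append((team, int(points)))

print(rows)  # [('Ajax', 90), ('Feyenoord', 82)]
```

The same pattern (select elements, extract text, convert types) applies regardless of which parsing library you pick.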

Scraping Flow

  1. Before scraping, make sure the data are actually useful for your analysis. Otherwise, you will just collect useless data and waste your time.
  2. Explore the structure of the website or page. Here are two websites that provide data about soccer: KassiesA: UEFA European Cup Football and
  3. Based on the structure, write your own spider with the libraries above to extract the data you want.
  4. Save the data locally in a suitable format for analysis.
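The steps above can be sketched end to end. Everything here is hypothetical: the match-listing markup, the `parse_matches` helper, and the output file name are stand-ins, and a real script would fetch the page with `urllib.request.urlopen` or `requests` instead of using a stub string.

```python
import csv
import io
import re

def parse_matches(html):
    """Extract (home, away, score) tuples from a simplified match listing.
    The <li> markup is a made-up example; adapt the pattern to the real page."""
    return re.findall(r"<li>(.+?) vs (.+?): (\d+-\d+)</li>", html)

# Steps 2-3: in practice, html = urlopen(url).read().decode(); here a stub page.
html = "<ul><li>Ajax vs PSV: 2-1</li><li>Inter vs Milan: 0-0</li></ul>"
matches = parse_matches(html)

# Step 4: save the data locally (a StringIO stands in for a CSV file on disk).
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["home", "away", "score"])
writer.writerows(matches)
print(buf.getvalue())
```

For anything beyond trivially regular markup, prefer a real parser such as BeautifulSoup over regular expressions.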


These libraries and tools are powerful and easy to use. I will describe how to use some of them in detail in future posts.





11 April 2019