The internet has an amazingly wide variety of information for human consumption. But this data is often difficult to access programmatically if it doesn't come in the form of a dedicated REST API. With Python tools like Beautiful Soup, you can scrape and parse this data directly from web pages to use for your projects and applications.
Libraries & Tools
Beautiful Soup is a Python library for pulling data out of HTML and XML files, and Requests lets you send HTTP requests very easily. Each of these libraries solves a specific part of the web scraping process: Requests fetches pages, and Beautiful Soup parses them.
How to Set Up the Scraping Project
Our setup is pretty simple: just create a folder and install Beautiful Soup, requests, and pandas. I am assuming that you have already installed Python 3.x. To create the folder and install the libraries, enter the commands given below:
```bash
mkdir scraper
pip install beautifulsoup4
pip install requests
pip install pandas
```
Let's use the example of scraping MIDI data from the internet to train a neural network with Magenta that can generate classic Nintendo-sounding music. In order to do this, we'll need a set of MIDI music from old Nintendo games. Using Beautiful Soup we can get this data from the Video Game Music Archive.
Getting started and setting up dependencies
Before moving on, you will need to make sure you have an up-to-date version of Python 3 and pip installed. Make sure you create and activate a virtual environment before installing any dependencies.
You'll need to install the Requests library for making HTTP requests to get data from the web page, and Beautiful Soup for parsing through the HTML.
With your virtual environment activated, run the following command in your terminal:
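```bash
# Installs the two libraries this tutorial uses; pin versions if you
# need a reproducible setup.
pip install requests beautifulsoup4
```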
We're using Beautiful Soup 4 because it's the latest version and Beautiful Soup 3 is no longer being developed or supported.
Using Requests to scrape data for Beautiful Soup to parse
First let's write some code to grab the HTML from the web page and look at how we can start parsing through it. The following code will send a `GET` request to the web page we want and create a BeautifulSoup object with the HTML from that page:
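A minimal sketch of that code; the exact URL is an assumption based on the Video Game Music Archive's NES page:

```python
import requests
from bs4 import BeautifulSoup

# NES page on the Video Game Music Archive (URL assumed from context).
vgm_url = 'https://www.vgmusic.com/music/console/nintendo/nes/'

html_text = requests.get(vgm_url).text
soup = BeautifulSoup(html_text, 'html.parser')
```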
With this `soup` object, you can navigate and search through the HTML for data that you want. For example, if you run `soup.title` after the previous code in a Python shell, you'll get the title of the web page. If you run `print(soup.get_text())`, you will see all of the text on the page.
Getting familiar with Beautiful Soup
The `find()` and `find_all()` methods are among the most powerful weapons in your arsenal. `soup.find()` is great for cases where you know there is only one element you're looking for, such as the `body` tag. On this page, `soup.find(id='banner_ad').text` will get you the text from the HTML element for the banner advertisement.
`soup.find_all()` is the most common method you will be using in your web scraping adventures. Using it, you can iterate through all of the hyperlinks on the page and print their URLs:
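One way to do that, using the standard `find_all('a')` idiom:

```python
# Every <a> tag on the page; link.get('href') returns None
# if a tag has no href attribute.
for link in soup.find_all('a'):
    print(link.get('href'))
```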
You can also provide different arguments to `find_all`, such as regular expressions or tag attributes, to filter your search as specifically as you want. You can find lots of cool features in the documentation.
Parsing and navigating HTML with BeautifulSoup
Before writing more code to parse the content that we want, let’s first take a look at the HTML that’s rendered by the browser. Every web page is different, and sometimes getting the right data out of them requires a bit of creativity, pattern recognition, and experimentation.
Our goal is to download a bunch of MIDI files, but there are a lot of duplicate tracks on this webpage as well as remixes of songs. We only want one of each song, and because we ultimately want to use this data to train a neural network to generate accurate Nintendo music, we won't want to train it on user-created remixes.
When you're writing code to parse through a web page, it's usually helpful to use the developer tools available to you in most modern browsers. If you right-click on the element you're interested in, you can inspect the HTML behind that element to figure out how you can programmatically access the data you want.
Let's use the `find_all` method to go through all of the links on the page, but use regular expressions to filter them so we are only getting links that point to MIDI files and whose link text has no parentheses, which will allow us to exclude all of the duplicates and remixes.
Create a file called `nes_midi_scraper.py` and add the following code to it:
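A sketch of that script; the URL and both regular expressions are assumptions based on the page structure described above (hrefs ending in `.mid`, link text free of parentheses):

```python
import re

import requests
from bs4 import BeautifulSoup

vgm_url = 'https://www.vgmusic.com/music/console/nintendo/nes/'
html_text = requests.get(vgm_url).text
soup = BeautifulSoup(html_text, 'html.parser')

# Only links whose href ends in ".mid" ...
attrs = {'href': re.compile(r'\.mid$')}
# ... and whose link text contains no parenthesis (parentheses mark
# duplicate tracks and remixes on this page).
tracks = soup.find_all('a', attrs=attrs, string=re.compile(r'^((?!\().)*$'))

for track in tracks:
    print(track)
print(len(tracks))
```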
This will filter through all of the MIDI files that we want on the page, print out the link tag corresponding to them, and then print how many files we filtered.
Run the code in your terminal with the command `python nes_midi_scraper.py`.
Downloading the MIDI files we want from the webpage
Now that we have working code to iterate through every MIDI file that we want, we have to write code to download all of them.
In `nes_midi_scraper.py`, add a function to your code called `download_track`, and call that function for each track in the loop iterating through them:
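A sketch of that change, assuming the page's hrefs are relative to `vgm_url`:

```python
def download_track(count, track_element):
    """Download one MIDI file, using `count` to keep filenames unique."""
    # The link text is the track title; swap '/' out so it is filename-safe.
    track_title = track_element.text.strip().replace('/', '-')
    # Hrefs on this page are assumed to be relative to vgm_url.
    download_url = '{}{}'.format(vgm_url, track_element['href'])
    file_name = '{}_{}.mid'.format(count, track_title)

    # Download the MIDI file and write it to disk.
    r = requests.get(download_url, allow_redirects=True)
    with open(file_name, 'wb') as f:
        f.write(r.content)


# Replace the print loop from before with calls to download_track:
for i, track in enumerate(tracks):
    download_track(i, track)
```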
In this `download_track` function, we're passing the Beautiful Soup object representing the HTML element of the link to the MIDI file, along with a unique number to use in the filename to avoid possible naming collisions.
Run this code from a directory where you want to save all of the MIDI files, and watch your terminal screen display all 2230 MIDIs that you downloaded (at the time of writing this). This is just one specific practical example of what you can do with Beautiful Soup.
The vast expanse of the World Wide Web
Now that you can programmatically grab things from web pages, you have access to a huge source of data for whatever your projects need. One thing to keep in mind is that changes to a web page’s HTML might break your code, so make sure to keep everything up to date if you're building applications on top of this.
If you're looking for something to do with the data you just grabbed from the Video Game Music Archive, you can try using Python libraries like Mido to work with MIDI data to clean it up, or use Magenta to train a neural network with it or have fun building a phone number people can call to hear Nintendo music.
I’m looking forward to seeing what you build. Feel free to reach out and share your experiences or ask any questions.
- Email: sagnew@twilio.com
- Twitter: @Sagnewshreds
- GitHub: Sagnew
- Twitch (streaming live code): Sagnewshreds
In this chapter, let us learn about the various Python modules that we can use for web scraping.
Python Development Environments using virtualenv
Virtualenv is a tool to create isolated Python environments. With the help of virtualenv, we can create a folder that contains all necessary executables to use the packages that our Python project requires. It also allows us to add and modify Python modules without access to the global installation.
You can use the following command to install virtualenv −
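```bash
pip install virtualenv
```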
Now, we need to create a directory that will represent the project, with the help of the following command −
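```bash
# "websc" matches the environment name shown later in this chapter.
mkdir websc
```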
Now, enter that directory with the following command −
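```bash
cd websc
```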
Now, we need to initialize the virtual environment in a folder of our choice, as follows −
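```bash
virtualenv websc
```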
Now, activate the virtual environment with the command given below. Once it is successfully activated, you will see its name in brackets on the left-hand side of the prompt.
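```bash
source websc/bin/activate    # Linux/macOS
# On Windows: websc\Scripts\activate
```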
We can install any module in this environment as follows −
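```bash
# requests is just an example here; any package installs the same way.
pip install requests
```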
For deactivating the virtual environment, we can use the following command −
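```bash
deactivate
```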
You can see that (websc) has been deactivated.
Python Modules for Web Scraping
Web scraping is the process of constructing an agent that can extract, parse, download, and organize useful information from the web automatically. In other words, instead of manually saving data from websites, web scraping software will automatically load and extract data from multiple websites as per our requirements.
In this section, we are going to discuss useful Python libraries for web scraping.
Requests
Requests is a simple and efficient Python HTTP library used for accessing web pages. With the help of Requests, we can get the raw HTML of web pages, which can then be parsed to retrieve the data. Before using requests, let us understand its installation.
Installing Requests
We can install it either in our virtual environment or in the global installation. With the help of the pip command, we can easily install it as follows −
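```bash
pip install requests
```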
Example
In this example, we are making a GET HTTP request for a web page. For this, we first need to import the requests library as follows −
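```python
import requests
```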
In the following line of code, we use requests to make a GET HTTP request for the URL https://authoraditiagarwal.com/ −
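```python
# Store the response object so we can inspect its content below.
r = requests.get('https://authoraditiagarwal.com/')
```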
Now we can retrieve the content by using the .text property, as follows −
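```python
# Print only the first 200 characters of the page's HTML.
print(r.text[:200])
```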
Observe that in the output, you get the first 200 characters of the page's HTML.
Urllib3
It is another Python library that can be used for retrieving data from URLs, similar to the requests library. You can read more on this in its technical documentation at https://urllib3.readthedocs.io/en/latest/.
Installing Urllib3
Using the pip command, we can install urllib3 either in our virtual environment or in the global installation −
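```bash
pip install urllib3
```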
Example: Scraping using Urllib3 and BeautifulSoup
In the following example, we are scraping a web page by using Urllib3 and BeautifulSoup. We are using Urllib3 in place of the requests library to get the raw data (HTML) from the web page. Then we use BeautifulSoup to parse that HTML data.
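A minimal sketch, reusing the URL from the Requests example and Python's built-in html.parser so no extra parser dependency is needed:

```python
import urllib3
from bs4 import BeautifulSoup

# PoolManager handles connection pooling and thread safety for us.
http = urllib3.PoolManager()
r = http.request('GET', 'https://authoraditiagarwal.com')

# r.data holds the raw response bytes; hand them to BeautifulSoup to parse.
soup = BeautifulSoup(r.data, 'html.parser')
print(soup.title)
print(soup.title.text)
```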
When you run this code, you will see the page's title tag and its text as the output.
Selenium
It is an open-source automated testing suite for web applications across different browsers and platforms. It is not a single tool but a suite of software. We have Selenium bindings for Python, Java, C#, Ruby, and JavaScript. Here we are going to perform web scraping by using Selenium and its Python bindings.
Selenium Python bindings provide a convenient API to access Selenium WebDrivers like Firefox, IE, Chrome, Remote etc. The current supported Python versions are 2.7, 3.5 and above.
Installing Selenium
Using the pip command, we can install Selenium either in our virtual environment or in the global installation −
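```bash
pip install selenium
```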
As Selenium requires a driver to interface with the chosen browser, we need to download it. The following table shows the common browsers and the driver to download for each.
| Browser | WebDriver |
| --- | --- |
| Chrome | ChromeDriver |
| Edge | Microsoft Edge WebDriver |
| Firefox | geckodriver |
| Safari | safaridriver (ships with macOS) |
Example
This example shows web scraping using Selenium. Selenium can also be used for testing, which is known as Selenium testing.
After downloading the driver for your particular browser and version, we can start programming in Python.
First, we need to import webdriver from selenium as follows −
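```python
from selenium import webdriver
```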
Now, provide the path of the web driver that we downloaded as per our requirement −
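```python
from selenium.webdriver.chrome.service import Service

# Placeholder path; point this at wherever you saved ChromeDriver.
# (Selenium 4 passes the path via a Service object; older versions
# used the executable_path argument instead.)
path = '/path/to/chromedriver'
browser = webdriver.Chrome(service=Service(path))
```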
Now, provide the URL that we want to open in the web browser, which is now controlled by our Python script.
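```python
# Reusing the URL from the earlier examples.
browser.get('https://authoraditiagarwal.com/')
```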
We can also scrape a particular element by providing an XPath, similar to the selectors available in lxml.
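```python
from selenium.webdriver.common.by import By

# Hypothetical XPath; adjust it to the element you want on the page.
# (Older Selenium versions used browser.find_element_by_xpath instead.)
element = browser.find_element(By.XPATH, '//h1')
print(element.text)
```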
You can check the browser, controlled by Python script, for output.
Scrapy
Scrapy is a fast, open-source web crawling framework written in Python, used to extract data from web pages with the help of selectors based on XPath. Scrapy was first released on June 26, 2008, licensed under BSD, with milestone 1.0 released in June 2015. It provides all the tools we need to extract, process, and structure data from websites.
Installing Scrapy
Using the pip command, we can install Scrapy either in our virtual environment or in the global installation −
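```bash
pip install scrapy
```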
For a more detailed study of Scrapy, you can go to the dedicated Scrapy tutorial.