
Practical XPath for Web Scraping

In this tutorial, we are going to see how to use XPath expressions in your Python code to extract data from the web


XPath is a technology that uses path expressions to select nodes or node-sets in an XML document (or, in our case, an HTML document). Even though XPath is not a programming language itself, it lets you write expressions that directly access a specific HTML element without having to walk the entire HTML tree.

It looks like the perfect tool for web scraping, right? At ScrapingBee we love XPath!


In our previous article about web scraping with Python we talked a little bit about XPath expressions. Now it’s time to dig deeper into this subject.

Why learn XPath

  • Knowing how to use basic XPath expressions is a must-have skill when extracting data from a web page.
  • It’s more powerful than CSS selectors
  • It allows you to navigate the DOM in any direction
  • Can match text inside HTML elements

Entire books have been written on XPath, and I don’t pretend to explain everything in depth. This is an introduction to XPath, and we will see through real examples how you can use it for your web scraping needs.

But first, let’s talk a little about the DOM

Document Object Model

I am going to assume you already know HTML, so this is just a small reminder.

As you already know, a web page is a document containing text within tags, which add meaning to the document by describing elements like titles, paragraphs, lists, links, etc.


Let’s see a basic HTML page, to understand what the Document Object Model is:

```html
<html>
  <head>
    <title>What is the DOM?</title>
  </head>
  <body>
    <h1>DOM 101</h1>
    <p>Web scraping is awesome!</p>
    <p>Here is my blog</p>
  </body>
</html>
```

This HTML code is basically HTML content encapsulated inside other HTML content. The HTML hierarchy can be viewed as a tree. We can already see this hierarchy through the indentation in the HTML code.

When your web browser parses this code, it will create a tree which is an object representation of the HTML document. It is called the Document Object Model.
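To make the tree idea concrete, here is a small sketch (assuming the lxml package, mentioned later in this article, is installed) that parses a page similar to the one above and prints each element together with its parent:

```python
from lxml import html

# A small, well-formed page similar to the DOM 101 example above.
page = """
<html>
  <head><title>What is the DOM?</title></head>
  <body>
    <h1>DOM 101</h1>
    <p>Web scraping is awesome!</p>
  </body>
</html>
"""

tree = html.fromstring(page)

# Walk the tree in document order: every element node knows its parent.
for element in tree.iter():
    parent = element.getparent()
    print(element.tag, "<-", parent.tag if parent is not None else "(root)")
```

The nesting you see in the HTML source becomes explicit parent/child links in the parsed tree.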

Below is the internal tree structure inside the Google Chrome inspector:

DOM in Chrome dev tools inspector

On the left we can see the HTML tree, and on the right we have the Javascript object representing the currently selected element, with all its attributes.

The important thing to remember is that the DOM you see in your browser, when you right-click and inspect, can be really different from the actual HTML that was sent. Maybe some Javascript code was executed and dynamically changed the DOM! For example, when you scroll through your Twitter account, a request is sent by your browser to fetch new tweets, and some Javascript code dynamically adds those new tweets to the DOM.

XPath Syntax

First, let’s look at some XPath vocabulary:

• In XPath terminology, as with HTML, there are different types of nodes: root nodes, element nodes, attribute nodes, and so-called atomic values, which is a synonym for text nodes in an HTML document.

• Each element node has one parent. In this example, the section element is the parent of p, details and button.

• Element nodes can have any number of children. In our example, the li elements are all children of the ul element.

• Siblings are nodes that have the same parent. p, details and button are siblings.

• Ancestors: a node’s parent, its parent’s parent, and so on.

• Descendants: a node’s children, its children’s children, and so on.
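These relationships map directly onto XPath axes. Here is a sketch on a small hypothetical fragment, mirroring the section/ul example the list refers to (lxml assumed installed):

```python
from lxml import etree

# section is the parent of p, details and button;
# the li elements are children of the ul element.
fragment = etree.fromstring("""<section>
  <p>Intro</p>
  <details>
    <ul>
      <li>one</li>
      <li>two</li>
    </ul>
  </details>
  <button>Click me</button>
</section>""")

first_li = fragment.xpath('//li[1]')[0]

# The parent of a li is the ul element.
print(first_li.xpath('parent::*')[0].tag)              # ul

# Its ancestors, in document order: section, details, ul.
print([e.tag for e in first_li.xpath('ancestor::*')])  # ['section', 'details', 'ul']

# p, details and button are siblings.
print([e.tag for e in fragment.xpath('//p/following-sibling::*')])  # ['details', 'button']
```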

There are different types of expressions to select a node in an HTML document. Here are the most important ones:


| XPath Expression | Description |
| --- | --- |
| nodename | The simplest one: selects all nodes with this node name |
| / | Selects from the root node (useful for writing absolute paths) |
| // | Selects matching nodes anywhere in the document |
| . | Selects the current node |
| .. | Selects the current node’s parent |
| @ | Selects an attribute |
| * | Matches any element node |
| @* | Matches any attribute node |
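A few of the expressions from the table, evaluated with lxml (assumed installed) on a tiny hypothetical document:

```python
from lxml import etree

doc = etree.fromstring(
    "<html><body>"
    "<div id='main'><a href='/home'>Home</a></div>"
    "<div><a href='/about'>About</a></div>"
    "</body></html>"
)

# //a : a elements anywhere in the document
links = doc.xpath('//a')
print([a.text for a in links])              # ['Home', 'About']

# /html/body/div : an absolute path from the root
print(len(doc.xpath('/html/body/div')))     # 2

# @href : selects the attribute itself
print(doc.xpath('//a/@href'))               # ['/home', '/about']

# .. : climbs from an element back to its parent
print(links[0].xpath('..')[0].get('id'))    # main
```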

You can also use predicates to find a node that contains a specific value. Predicates are always in square brackets: [predicate]

Here are some examples :

| XPath Expression | Description |
| --- | --- |
| //li[last()] | Selects the last li element |
| //li[3] | Selects the third li element (indexing starts at 1) |
| //div[@class='product'] | Selects all div elements that have a class attribute with the value "product" |
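The predicate examples from the table can be run with lxml (assumed installed) on hypothetical markup:

```python
from lxml import etree

doc = etree.fromstring(
    "<body>"
    "<ul><li>1</li><li>2</li><li>3</li><li>4</li></ul>"
    "<div class='product'>A</div>"
    "<div class='review'>B</div>"
    "<div class='product'>C</div>"
    "</body>"
)

# last() matches the final li, [3] the third one (1-based indexing).
print(doc.xpath('//li[last()]')[0].text)                        # 4
print(doc.xpath('//li[3]')[0].text)                             # 3

# Attribute predicates match both product divs, skipping the review.
print([d.text for d in doc.xpath("//div[@class='product']")])   # ['A', 'C']
```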

Now we will see some examples of XPath expressions. We can test XPath expressions inside Chrome Dev Tools, so it is time to fire up Chrome.

To do so, right-click on the web page -> Inspect, then press cmd + f on a Mac or ctrl + f on other systems. You can then enter an XPath expression, and the match will be highlighted in the Dev Tools.

XPath expression in Chrome dev tools

Tip

In the Dev Tools, you can right-click on any DOM node and copy its full XPath expression, which you can later simplify.

Tired of getting blocked while scraping the web? Our API handles headless browsers and rotates proxies for you.

XPath with Python

There are many Python packages that allow you to use XPath expressions to select HTML elements like lxml, Scrapy or Selenium. In these examples, we are going to use Selenium with Chrome in headless mode. You can look at this article to set up your environment: Scraping Single Page Application with Python

In this example, we are going to see how to extract E-commerce product data from Ebay.com with XPath expressions.

Complex XPath expression
```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.headless = True
options.add_argument("--window-size=1920,1200")

driver = webdriver.Chrome(options=options, executable_path=r'/usr/local/bin/chromedriver')
driver.get("https://www.ebay.com/itm/Dyson-V7-Fluffy-HEPA-Cordless-Vacuum-Cleaner-Blue-New/273976851242")

title = driver.find_element_by_xpath('//h1')
current_price = driver.find_element_by_xpath("//span[@id='prcIsum']")
image = driver.find_element_by_xpath("//img[@id='icImg']")

product_data = {
    'title': title.text,
    'current_price': current_price.get_attribute('content'),
    'image_url': image.get_attribute('src'),
}

print(product_data)
driver.quit()
```

Finanziamenti - Agevolazioni

Siamo operativi in tutta Italia

 

In these three XPath expressions, we start with //, meaning we select matching nodes anywhere in the HTML tree. Then we use a predicate [predicate] to match specific IDs. IDs are supposed to be unique, so it’s not a problem to do this.

But when you select an element by its class name, it’s better to use a relative path, because the same class name can be used anywhere in the DOM; the more specific you are, the better. Not only that, but when the website changes (and it will), your code will be much more resilient.
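The point can be sketched with lxml (assumed installed) on hypothetical markup where the same class appears in two unrelated places:

```python
from lxml import etree

# 'price' appears both in the navigation and in the listing we care about.
doc = etree.fromstring(
    "<body>"
    "<nav><span class='price'>menu junk</span></nav>"
    "<div id='listing'><span class='price'>19.99</span></div>"
    "</body>"
)

# A bare // search matches the class anywhere in the DOM.
print(len(doc.xpath("//span[@class='price']")))            # 2

# Scoping the search under a specific container is more precise
# and more resilient to DOM changes elsewhere on the page.
listing = doc.xpath("//div[@id='listing']")[0]
print(listing.xpath(".//span[@class='price']")[0].text)    # 19.99
```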

Automagically authenticate to a website

When you have to perform the same action or extract the same type of information on many websites, we can be a little smarter with our XPath expressions and write generic ones, instead of a specific XPath for each website.

In order to explain this, we’re going to make a “generic” authentication function that will take a Login URL, a username and password, and try to authenticate on the target website.

To auto-magically log into a website with your scrapers, the idea is :

Most login forms will have an input with type="password". So we can select this password input with a simple: //input[@type='password']

Once we have this password input, we can use a relative path to select the username/email input. It will generally be the first preceding input that isn’t hidden: .//preceding::input[not(@type='hidden')]

It’s really important to exclude hidden inputs, because most of the time you will have at least one hidden CSRF token input. CSRF stands for Cross-Site Request Forgery. The token is generated by the server and is required in every form submission / POST request. Almost every website uses this mechanism to prevent CSRF attacks.
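The two-step selection described above can be sketched with lxml (assumed installed) on a minimal hypothetical login form:

```python
from lxml import etree

# A minimal login form with a hidden CSRF token input.
form = etree.fromstring(
    "<form action='/login'>"
    "<input type='hidden' name='csrf_token' value='abc123'/>"
    "<input type='text' name='username'/>"
    "<input type='password' name='password'/>"
    "<button type='submit'>Log in</button>"
    "</form>"
)

password_input = form.xpath("//input[@type='password']")[0]

# Nearest preceding input that is not hidden -> the username field.
# (XPath results come back in document order, hence the [-1].)
username_input = password_input.xpath(
    ".//preceding::input[not(@type='hidden')]"
)[-1]
print(username_input.get('name'))   # username

# Climb to the enclosing form, then find its submit control.
form_element = password_input.xpath('ancestor::form')[0]
submit = form_element.xpath(".//*[@type='submit']")[0]
print(submit.text)                  # Log in
```

Note how the hidden csrf_token input is skipped entirely by the not(@type='hidden') predicate.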

Now we need to select the enclosing form from one of the inputs:

.//ancestor::form


And with the form, we can select the submit input/button:

.//*[@type='submit']

Here is an example of such a function:

```python
def autologin(driver, url, username, password):
    driver.get(url)

    password_input = driver.find_element_by_xpath("//input[@type='password']")
    password_input.send_keys(password)

    username_input = password_input.find_element_by_xpath(".//preceding::input[not(@type='hidden')]")
    username_input.send_keys(username)

    form_element = password_input.find_element_by_xpath(".//ancestor::form")
    form_element.find_element_by_xpath(".//*[@type='submit']").click()

    return driver
```

Of course it is far from perfect: it won’t work everywhere, but you get the idea.

Conclusion

XPath is very powerful when it comes to selecting HTML elements on a page, and often more powerful than CSS selectors.

One of the most difficult tasks when writing XPath expressions is not the expression itself, but being precise enough to select the right element, while remaining resilient enough to survive DOM changes.

At ScrapingBee, depending on our needs, we use XPath expressions or CSS selectors for our ready-made APIs. We will discuss the differences between the two in another blog post!

I hope you enjoyed this article. If you’re interested in CSS selectors, check out this BeautifulSoup tutorial.

Happy Scraping!

