Web Scraper Url List




Introduction

This coding project builds on my first web scraping project and uses some of the code I developed there to get the data from Facebook. I am sure you have seen posts from a company’s or small business’s Facebook page where they ask you to like, share and follow in order to win a prize. But how do these businesses then determine who has won? They have to go and compare the names of all the people who:

  • liked the post,
  • shared the post and
  • followed their page

and then randomly select a winner from that group of people. If only about 10 people engaged with the promotional post, it won’t take too long, but imagine if hundreds of people participated. It would take quite a lot of time to determine the winner then…

To help save a bit of time, we are going to construct a program that will scrape all the names of the people who engaged with the promotion post and randomly select a winner from the eligible participants.

Right! Let’s get started!

The data source

We are interested in the names of people who follow the business’s page and who liked and shared the post in question. All of this information is available on Facebook. In the end we will have three lists:

  • a list containing the names of people who follow the business’s Facebook page,
  • a list of people who have liked the promotional post and
  • a list of people who have shared the post.

The names that appear in all three lists will be put in a final list, and a random winner will be selected from the list of eligible names.

Coding the web scraper

Requirements

For this program to work, you will need to install:

  • Python 3.7
  • Requests library
  • BeautifulSoup library
  • Selenium library
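
These can be installed with pip, for example:

```
pip install requests beautifulsoup4 selenium
```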

Get the HTML data

The first thing we need to do is get the HTML data of the Facebook post we are interested in into our Python script. We can do this using the Python requests library.

Web scraping Facebook is a bit different, as the content is behind a login page. We will need to first have a look at the HTML code of the login page. For this project, we are going to log into https://mbasic.facebook.com:

  • start working your way through the HTML until you find the <form> HTML tag.
  • within the <form> tag look for the method='post' argument.
  • go down further into the form tag and look for the <input> tags. There should be at least two (username and password). The username input tag is generally of type=email and the password, type=password.
  • look within these input tags for a name argument. This is the name of the input field, and it is how requests is going to know where to “enter” your credentials. For this login page they are name='email' and name='pass'.

Log into Facebook

The following code can be used to log into your Facebook account in order to scrape the necessary data that we need:
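
Below is a minimal sketch of that login step. The credentials are hypothetical placeholders, and because the form may contain extra hidden fields, every named input is copied into the payload before the name='email' and name='pass' fields we identified above are filled in:

```python
import requests
from bs4 import BeautifulSoup

EMAIL = "you@example.com"     # hypothetical placeholder -- use your own login
PASSWORD = "your_password"    # hypothetical placeholder

session = requests.Session()
session.headers.update({"User-Agent": "Mozilla/5.0"})

# Load the login page and copy every named <input> of the post form, so any
# hidden tokens the form expects are sent back along with our credentials
login_page = session.get("https://mbasic.facebook.com/")
soup = BeautifulSoup(login_page.text, "html.parser")
form = soup.find("form", method="post")

payload = {field.get("name"): field.get("value", "")
           for field in form.find_all("input") if field.get("name")}
payload["email"] = EMAIL     # the name='email' input found during inspection
payload["pass"] = PASSWORD   # the name='pass' input found during inspection

# Submit the form to the URL in its action attribute
login_url = requests.compat.urljoin("https://mbasic.facebook.com/", form.get("action"))
session.post(login_url, data=payload)
```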

Scrape the data

The URL is standard for most posts. You will need the following:

  • the ID of the post
  • the number of people who engaged with the post

With the two variables known, the URL where we will scrape the data is:
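
As a sketch, assuming the mbasic reaction-browser URL pattern (verify the exact pattern by opening the list of likes in your browser and copying it from the address bar):

```python
post_id = "1234567890123456"   # hypothetical ID of the promotional post
engaged = 250                  # hypothetical number of people who engaged

# Assumed URL pattern for the page listing the reactions on mbasic.facebook.com
url = ("https://mbasic.facebook.com/ufi/reaction/profile/browser/"
       f"?ft_ent_identifier={post_id}&limit={engaged}")
```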

Now we can scrape the data by adding the following line:
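
```python
r = session.get(url)   # session is the logged-in Session from the login step
```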

Zooming in

With the HTML data stored in the variable r we can get the names of the people who liked the post with the following:
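
A minimal sketch, assuming each name sits inside a heading tag on the reactions page; check the actual tags and classes in the page’s HTML and adjust the selectors accordingly:

```python
soup = BeautifulSoup(r.text, "html.parser")

people_who_liked = []
for heading in soup.find_all("h3"):          # assumed: one heading per person
    link = heading.find("a")
    if link and link.get_text(strip=True):
        people_who_liked.append(link.get_text(strip=True))
```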

The list people_who_liked has now been created and will be used later.

People who shared

The same can be done for getting the list of people who shared the post, with a few changes to the code for BeautifulSoup:
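
A sketch along the same lines; the shares URL is an assumption, so copy the real one from your browser:

```python
# Assumed URL of the page listing shares -- verify it in your browser
shares_url = f"https://mbasic.facebook.com/browse/shares/?id={post_id}"
r_shares = session.get(shares_url)
soup = BeautifulSoup(r_shares.text, "html.parser")

people_who_shared = []
for heading in soup.find_all("h3"):          # assumed markup, as before
    link = heading.find("a")
    if link and link.get_text(strip=True):
        people_who_shared.append(link.get_text(strip=True))
```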

People who follow your page

This one is a bit more tricky. We can only access the list of people who follow a page if we have administrator rights to that page. Also, the list is not shown when you access it through https://mbasic.facebook.com or https://m.facebook.com, so we need another approach to get the HTML data, using the Selenium library:

  • Log in to Facebook
  • Navigate to page settings listing the people who liked our page
  • Scroll down until all the names are displayed on the screen
  • Scrape the list of names

Load the page of names

Selenium can take control of the browser and log in:
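
A minimal sketch, assuming chromedriver is available and that the login form uses the same name='email' and name='pass' fields we found earlier:

```python
from time import sleep
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()              # assumes chromedriver is on your PATH
driver.get("https://www.facebook.com/")
sleep(5)                                 # let the login page load

driver.find_element(By.NAME, "email").send_keys(EMAIL)   # same credentials as before
password_field = driver.find_element(By.NAME, "pass")
password_field.send_keys(PASSWORD)
password_field.send_keys(Keys.RETURN)    # submit the login form
sleep(5)
```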

The sleep() command gives the page time to load before entering the login details. Next we navigate to the page settings listing all the names:
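
A sketch of that step; the settings URL and the name selector are assumptions and need to be matched against your own page:

```python
# Assumed settings URL -- replace "YourPage" and navigate to the section that
# lists the people who like/follow the page
driver.get("https://www.facebook.com/YourPage/settings/?tab=people_and_other_pages")
sleep(5)

# Scroll down repeatedly so the full list of names gets loaded
for _ in range(15):
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    sleep(2)

# Collect the names -- the CSS selector is an assumption; inspect the page
# and adjust it to whatever element actually wraps each name
people_who_follow = []
for element in driver.find_elements(By.CSS_SELECTOR, "a[data-hovercard]"):
    name = element.text.strip()
    if name:
        people_who_follow.append(name)

driver.quit()
```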

For the above code we assume that 15 page scrolls will show all the names of the people who follow the page, but you can increase this depending on the number of people who follow your page.

Determine the winner

We now have three lists:

  • People who liked the page
  • People who liked the post
  • People who shared the post

We need to create a new list that has all the names of the people who appear in all three of the above lists:
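
A sketch using the three lists built above (people_who_follow is the list collected with Selenium):

```python
# Only people who appear in all three lists are eligible to win
eligible = [name for name in set(people_who_follow)
            if name in people_who_liked and name in people_who_shared]
```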

The last thing to do is to select a winner randomly:
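
```python
import random

winner = random.choice(eligible)
print(f"The winner is: {winner}")
```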

You can also select more than one winner:
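
```python
winners = random.sample(eligible, n)
print(winners)
```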

where n is the number of winners you want to select.

Conclusion

This code will ensure that you can select a winner for the promotional post in only a few minutes, rather than doing it manually.

Related

C# is still a popular backend programming language, and you might find yourself in need of it for scraping a web page (or multiple pages). In this article, we will cover scraping with C# using an HTTP request, parsing the results, and then extracting the information that you want to save. This method is common for basic scraping, but you will sometimes come across single-page web applications whose content is rendered by JavaScript, and these require a different approach. We’ll also cover scraping such pages using PuppeteerSharp, Selenium WebDriver, and Headless Chrome.

Note: This article assumes that the reader is familiar with C# syntax and HTTP request libraries. The PuppeteerSharp and Selenium WebDriver .NET libraries are available to make integration of Headless Chrome easier for developers. Also, this project uses the .NET Core 3.1 framework and the HTML Agility Pack for parsing raw HTML.

Part I: Static Pages

Setup

If you’re using C# as a language, you probably already use Visual Studio. This article uses a simple .NET Core Web Application project using MVC (Model View Controller). After you create a new project, go to the NuGet Package Manager where you can add the necessary libraries used throughout this tutorial.

In NuGet, click the “Browse” tab and then type “HTML Agility Pack” to find the dependency.

Install the package, and then you’re ready to go. This package makes it easy to parse the downloaded HTML and find tags and information that you want to save.

Finally, before you get started with coding the scraper, you need the following libraries added to the codebase:
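
As a sketch, these are the using statements the code below relies on (HtmlAgilityPack comes from the NuGet package installed above):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using HtmlAgilityPack;
using Microsoft.AspNetCore.Mvc;
```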

Making an HTTP Request to a Web Page in C#

Imagine that you have a scraping project where you need to scrape Wikipedia for information on famous programmers. Wikipedia has a page with a list of famous programmers with links to each profile page. You can scrape this list and add it to a CSV file (or Excel spreadsheet) to save for future review and use. This is just one simple example of what you can do with web scraping, but the general concept is to find a site that has the information you need, use C# to scrape the content, and store it for later use. In more complex projects, you can crawl pages using the links found on a top category page.

Using .NET HTTP Libraries to Retrieve HTML

.NET Core introduced asynchronous HTTP request libraries to the framework. These libraries are native to .NET, so no additional libraries are needed for basic requests. Before you make the request, you need to build the URL and store it in a variable. Because we already know the page that we want to scrape, a simple URL variable can be added to the HomeController’s Index() method. The HomeController Index() method is the default call when you first open an MVC web application.

Add the following code to the Index() method in the HomeController file:
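
The sketch below assumes the Wikipedia list-of-programmers page as the target:

```csharp
// Hypothetical target page for this example
string url = "https://en.wikipedia.org/wiki/List_of_programmers";
```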

Using .NET HTTP libraries, a static asynchronous task is returned from the request, so it’s easier to put the request functionality in its own static method. Add the following method to the HomeController file:
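
A sketch of that method, reconstructed to match the line-by-line description that follows (each statement is repeated below as it is discussed):

```csharp
private static async Task<string> CallUrl(string fullUrl)
{
    HttpClient client = new HttpClient();
    ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls13;
    client.DefaultRequestHeaders.Accept.Clear();
    var response = client.GetStringAsync(fullUrl);
    return await response;
}
```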

Let’s break down each line of code in the above CallUrl() method.
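
```csharp
HttpClient client = new HttpClient();
```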

This statement creates an HttpClient variable, which is an object from the native .NET framework.
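
```csharp
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls13;
```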

If you get HTTPS handshake errors, it’s likely because you are not using the right TLS version. The above statement forces the connection to use TLS 1.3 so that an HTTPS handshake can be established. Note that some older web servers do not support TLS 1.3 yet, so you may need to allow TLS 1.2 as well for those sites. For this basic task, cryptographic strength is not important, but it could be for other scraping requests involving sensitive data.
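
```csharp
client.DefaultRequestHeaders.Accept.Clear();
```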

This statement clears headers should you decide to add your own. For instance, you might scrape content using an API request that requires a Bearer authorization token. In such a scenario, you would then add a header to the request. For example:
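
```csharp
// Hypothetical example of adding an authorization header
client.DefaultRequestHeaders.Add("Authorization", "Bearer <your-token-here>");
```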

The above would pass the authorization token to the web application server to verify that you have access to the data. Next, we have the last two lines:
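
```csharp
var response = client.GetStringAsync(fullUrl);
return await response;
```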

These two statements retrieve the HTML content, await the response (remember this is asynchronous) and return it to the HomeController’s Index() method where it was called. The following code is what your Index() method should contain (for now):
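
A sketch, using the url variable and the CallUrl() method from above:

```csharp
public async Task<IActionResult> Index()
{
    string url = "https://en.wikipedia.org/wiki/List_of_programmers";
    var response = await CallUrl(url);
    return View();
}
```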

The code to make the HTTP request is done. We haven’t parsed it yet, but now is a good time to run the code to ensure that the Wikipedia HTML is returned rather than an error. Make sure you set a breakpoint in the Index() method at the following line:
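
```csharp
return View();   // (assumed) the last line of Index(), so response is populated when the debugger stops
```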

This will ensure that you can use the Visual Studio debugger UI to view the results.

You can test the above code by clicking the “Run” button in the Visual Studio menu:

Visual Studio will stop at the breakpoint, and now you can view the results.

If you click “HTML Visualizer” from the context menu, you can see a raw HTML view of the results, but you can see a quick preview by just hovering your mouse over the variable. You can see that HTML was returned, which means that an error did not occur.

Parsing the HTML

With the HTML retrieved, it’s time to parse it. HTML Agility Pack is a common tool, but you may have your own preference. Even LINQ can be used to query HTML, but for this example, and for ease of use, the Agility Pack is what we will use.

Before you parse the HTML, you need to know a little bit about the structure of the page so that you know what to use as markers for your parsing to extract only what you want and not every link on the page. You can get this information using the Chrome Inspect function. In this example, the page has a table of contents links at the top that we don’t want to include in our list. You can also take note that every link is contained within an <li> element.

From the above inspection, we know that we want the content within the “li” element but not the ones with the tocsection class attribute. With the Agility Pack, we can eliminate them from the list.

We will parse the document in its own method in the HomeController, so create a new method named ParseHtml() and add the following code to it:
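
A sketch that matches the description below; the tocsection filter and the absolute-URL prefix reflect the page structure noted above:

```csharp
private List<string> ParseHtml(string html)
{
    HtmlDocument htmlDoc = new HtmlDocument();
    htmlDoc.LoadHtml(html);

    // Keep every <li> that is not part of the table of contents
    var listItems = htmlDoc.DocumentNode.Descendants("li")
        .Where(node => !node.GetAttributeValue("class", "").Contains("tocsection"));

    List<string> wikiLinks = new List<string>();
    foreach (var item in listItems)
    {
        // Take the first anchor in the list item and build an absolute URL,
        // since Wikipedia uses relative links in the href attribute
        var anchor = item.Descendants("a").FirstOrDefault();
        if (anchor != null && anchor.Attributes["href"] != null)
        {
            wikiLinks.Add("https://en.wikipedia.org" + anchor.Attributes["href"].Value);
        }
    }

    return wikiLinks;
}
```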

In the above code, a generic list of strings (the links) is created from the parsed HTML with a list of links to famous programmers on the selected Wikipedia page. We use LINQ to eliminate the table of content links, so now we just have the HTML content with links to programmer profiles on Wikipedia. We use .NET’s native functionality in the foreach loop to parse the first anchor tag that contains the link to the programmer profile. Because Wikipedia uses relative links in the href attribute, we manually create the absolute URL to add convenience when a reader goes into the list to click each link.

Exporting Scraped Data to a File

The code above opens the Wikipedia page and parses the HTML. We now have a generic list of links from the page. Now, we need to export the links to a CSV file. We’ll make another method named WriteToCsv() to write data from the generic list to a file. The following code is the full method that writes the extracted links to a file named “links.csv” and stores it on the local disk.
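
```csharp
private void WriteToCsv(List<string> links)
{
    // One link per line; System.IO.File is fully qualified so it does not
    // clash with the File() helper method that MVC controllers expose
    StringBuilder sb = new StringBuilder();
    foreach (var link in links)
    {
        sb.AppendLine(link);
    }
    System.IO.File.WriteAllText("links.csv", sb.ToString());
}
```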

The above code is all it takes to write data to a file on local storage using native .NET framework libraries.

The full HomeController code for this scraping section is below.
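
Assembled from the sketches above (the namespace and class layout are assumptions about the project structure):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using HtmlAgilityPack;
using Microsoft.AspNetCore.Mvc;

namespace WebScraper.Controllers   // assumed project namespace
{
    public class HomeController : Controller
    {
        public async Task<IActionResult> Index()
        {
            string url = "https://en.wikipedia.org/wiki/List_of_programmers";
            var response = await CallUrl(url);
            var linkList = ParseHtml(response);
            WriteToCsv(linkList);
            return View();
        }

        private static async Task<string> CallUrl(string fullUrl)
        {
            HttpClient client = new HttpClient();
            ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls13;
            client.DefaultRequestHeaders.Accept.Clear();
            var response = client.GetStringAsync(fullUrl);
            return await response;
        }

        private List<string> ParseHtml(string html)
        {
            HtmlDocument htmlDoc = new HtmlDocument();
            htmlDoc.LoadHtml(html);

            var listItems = htmlDoc.DocumentNode.Descendants("li")
                .Where(node => !node.GetAttributeValue("class", "").Contains("tocsection"));

            List<string> wikiLinks = new List<string>();
            foreach (var item in listItems)
            {
                var anchor = item.Descendants("a").FirstOrDefault();
                if (anchor != null && anchor.Attributes["href"] != null)
                {
                    wikiLinks.Add("https://en.wikipedia.org" + anchor.Attributes["href"].Value);
                }
            }
            return wikiLinks;
        }

        private void WriteToCsv(List<string> links)
        {
            StringBuilder sb = new StringBuilder();
            foreach (var link in links)
            {
                sb.AppendLine(link);
            }
            System.IO.File.WriteAllText("links.csv", sb.ToString());
        }
    }
}
```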

Part II: Scraping Dynamic JavaScript Pages

In the previous section, data was easily available to our scraper because the HTML was constructed and returned to the scraper the same way a browser would receive data. Newer JavaScript technologies such as Vue.js render pages using dynamic JavaScript code. When a page uses this type of technology, a basic HTTP request won’t return HTML to parse. Instead, you need to parse data from the JavaScript rendered in the browser.

Dynamic JavaScript isn’t the only issue. Some sites detect whether JavaScript is enabled or evaluate the UserAgent value sent by the browser. The UserAgent header is a value that tells the web server the type of browser being used to access pages (e.g. Chrome, Firefox, etc.). If your scraper code does not send a UserAgent, many web servers will return different content, since they vary their responses based on UserAgent values. Some web servers will also use JavaScript to detect when a request is not from a human user.

You can overcome this issue using libraries that leverage Headless Chrome to render the page and then parse the results. We’re introducing two libraries freely available from NuGet that can be used in conjunction with Headless Chrome to parse results. PuppeteerSharp is the first solution we use that makes asynchronous calls to a web page. The other solution is Selenium WebDriver, which is a common tool used in automated testing of web applications.

Using PuppeteerSharp with Headless Chrome

For this example, we will add the asynchronous code directly into the HomeController’s Index() method. This requires a small change to the default Index() method shown in the code below.
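
The change is simply to make Index() asynchronous so the Puppeteer calls can be awaited; a minimal sketch:

```csharp
// The default synchronous Index() becomes asynchronous
public async Task<IActionResult> Index()
{
    return View();
}
```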

In addition to the Index() method changes, you must also add the library reference to the top of your HomeController code. Before you can use Puppeteer, you first must install the library from NuGet and then add the following line in your using statements:
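
```csharp
using PuppeteerSharp;
```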

Now, it’s time to add your HTTP request and parsing code. In this example, we’ll extract all URLs (the <a> tag) from the page. Add the following code to the HomeController to pull the page source in Headless Chrome, making it available for us to extract links (note the change in the Index() method, which replaces the same method in the previous section example):
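
A sketch, assuming Chrome is installed at the path assigned to ExecutablePath (adjust it to your machine):

```csharp
public async Task<IActionResult> Index()
{
    string fullUrl = "https://en.wikipedia.org/wiki/List_of_programmers";
    List<string> programmerLinks = new List<string>();

    var options = new LaunchOptions
    {
        Headless = true,
        // Assumed local path to chrome.exe -- required for Headless Chrome to start
        ExecutablePath = @"C:\Program Files\Google\Chrome\Application\chrome.exe"
    };

    var browser = await Puppeteer.LaunchAsync(options);
    var page = await browser.NewPageAsync();
    await page.GoToAsync(fullUrl);

    // Pull the rendered page source and parse it with the Agility Pack as before
    var content = await page.GetContentAsync();
    var htmlDoc = new HtmlDocument();
    htmlDoc.LoadHtml(content);

    var anchors = htmlDoc.DocumentNode.Descendants("a")
        .Where(a => a.Attributes["href"] != null);

    foreach (var anchor in anchors)
    {
        programmerLinks.Add(anchor.Attributes["href"].Value);
    }

    await browser.CloseAsync();
    return View();
}
```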

Similar to the previous example, the links found on the page were extracted and stored in a generic list named programmerLinks. Notice that the path to chrome.exe is added to the options variable. If you don’t specify the executable path, Puppeteer will be unable to initialize Headless Chrome.

Using Selenium with Headless Chrome

If you don’t want to use Puppeteer, you can use Selenium WebDriver. Selenium is a common tool used in automation testing on web applications, because in addition to rendering dynamic JavaScript code, it can also be used to emulate human actions such as clicks on a link or button. To use this solution, you need to go to NuGet and install Selenium.WebDriver and (to use Headless Chrome) Selenium.WebDriver.ChromeDriver. Note: Selenium also has drivers for other popular browsers such as Firefox.

Add the following library to the using statements:
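
```csharp
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
```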

Now, you can add the code that will open a page and extract all links from the results. The following code demonstrates how to extract links and add them to a generic list.
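
A sketch; the Selenium.WebDriver.ChromeDriver package installed above supplies the driver binary, and the target page follows the earlier example:

```csharp
public IActionResult Index()
{
    string fullUrl = "https://en.wikipedia.org/wiki/List_of_programmers";
    List<string> programmerLinks = new List<string>();

    var options = new ChromeOptions();
    options.AddArguments("headless");   // run Chrome without a visible window

    var driver = new ChromeDriver(options);
    driver.Navigate().GoToUrl(fullUrl);

    // Grab every anchor element on the rendered page and store its href value
    foreach (var link in driver.FindElements(By.TagName("a")))
    {
        string href = link.GetAttribute("href");
        if (!string.IsNullOrEmpty(href))
        {
            programmerLinks.Add(href);
        }
    }

    driver.Quit();
    return View();
}
```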

Notice that the Selenium solution is not asynchronous, so if you have a large pool of links and actions to take on a page, it will freeze your program until the scraping completes. This is the main difference between the previous solution using Puppeteer and Selenium.

Conclusion

Web scraping is a powerful tool for developers who need to obtain large amounts of data from a web application. With pre-packaged dependencies, you can turn a difficult process into only a few lines of code.

One issue we didn’t cover is getting blocked, either by remote rate limits or by bot detection. Some applications want to limit the number of bots accessing their data and will treat your scraper as one of them. Our web scraping API can overcome this limitation so that developers can focus on parsing HTML and obtaining data rather than dealing with remote blocks.