Friday 3 July 2015

SFTW: Scraping data with Google Refine

For the first Something For The Weekend of 2012 I want to tackle a common problem when you're trying to scrape a collection of webpages: they have some sort of structure in their URL like this, where part of the URL refers to the name or code of an entity:

http://www.ltscotland.org.uk/scottishschoolsonline/schools/freemealentitlement.asp?iSchoolID=5237521

http://www.ltscotland.org.uk/scottishschoolsonline/schools/freemealentitlement.asp?iSchoolID=5237629

http://www.ltscotland.org.uk/scottishschoolsonline/schools/freemealentitlement.asp?iSchoolID=5237823

In this instance, you can see that the URLs are identical apart from a seven-digit code at the end: the ID of the school the data refers to.

There are a number of ways you could scrape this data. You could use Google Docs and the =importXML formula, but Google Docs will only let you use it 50 times in any one spreadsheet (you could copy the results, select Edit > Paste Special > Values Only, and then use the formula a further 50 times if that covers what's left – here's one I prepared earlier).

And you could use Scraperwiki to write a powerful scraper – but you need to understand enough coding to do so quickly (here’s a demo I prepared earlier).

A middle option is to use Google Refine, and here’s how you do it.

Assembling the ingredients

With the basic URL structure identified, we already have half of our ingredients. What we need next is a list of the ID codes that we're going to use to complete each URL.

An advanced search for “list seed number scottish schools filetype:xls” brings up a link to this spreadsheet (XLS) which gives us just that.

The spreadsheet will need editing: remove any rows you don’t need. This will reduce the time that the scraper will take in going through them. For example, if you’re only interested in one local authority, or one type of school, sort your spreadsheet so that you can delete those above or below them.

Now to combine the ID codes with the base URL.

Bringing your data into Google Refine

Open Google Refine and create a new project with the edited spreadsheet containing the school IDs.

At the top of the school ID column click on the drop-down menu and select Edit column > Add column based on this column…

In the New column name box at the top call this ‘URL’.

In the Expression box type the following piece of GREL (Google Refine Expression Language):

"http://www.ltscotland.org.uk/scottishschoolsonline/schools/freemealentitlement.asp?iSchoolID="+value

(Type in the quotation marks yourself – if you’re copying them from a webpage you may have problems)

The ‘value’ bit means the value of each cell in the column you just selected. The plus sign adds it to the end of the URL in quotes.

In the Preview window you should see the results – you can even copy one of the resulting URLs and paste it into a browser to check that it works. (On one occasion Google Refine added .0 to the end of the ID number, ruining the URL. You can solve this by changing value to value.substring(0,7) – this extracts the first seven characters of the ID number, omitting the '.0'.) UPDATE: in the comments Thad suggests "perhaps, upon import of your spreadsheet of IDs, you forgot to uncheck the importer option to Parse as numbers?"

Click OK if you’re happy, and you should have a new column with a URL for each school ID.

Grabbing the HTML for each page

Now click on the top of this new URL column and select Edit column > Add column by fetching URLs…

In the New column name box at the top call this ‘HTML’.

All you need in the Expression window is ‘value’, so leave that as it is.

Click OK.

Google Refine will now go to each of those URLs and fetch the HTML contents. As we have a couple of thousand rows here, this will take a long time – hours, depending on the speed of your computer and internet connection (it may not work at all if either isn't very fast). So leave it running and come back to it later.

Extracting data from the raw HTML with parseHTML

When it’s finished you’ll have another column where each cell is a bunch of HTML. You’ll need to create a new column to extract what you need from that, and you’ll also need some GREL expressions explained here.

First you need to identify what data you want, and where it is in the HTML. To find it, right-click on one of the webpages containing the data, view the page source, and search for a key phrase or figure that you want to extract. Around that data you want to find an HTML tag like <table class="destinations"> or <div id="statistics">. Keep that open in another window while you tweak the expression we come onto below…

Back in Google Refine, at the top of the HTML column click on the drop-down menu and select Edit column > Add column based on this column…

In the New column name box at the top give it a name describing the data you’re going to pull out.

In the Expression box type the following piece of GREL (Google Refine Expression Language):

value.parseHtml().select("table.destinations")[0].select("tr").toString()

(Again, type the quotation marks yourself rather than copying them from here or you may have problems)

I’ll break down what this is doing:

value.parseHtml()

parse the HTML in each cell (value)

.select("table.destinations")

find a table with a class (.) of "destinations" (in the source HTML this reads <table class="destinations">). If it was <div id="statistics"> then you would write .select("div#statistics") – the hash sign representing an 'id' and the full stop representing a 'class'.

[0]

This zero in square brackets tells Refine to only grab the first table – a number 1 would indicate the second, and so on. This is because numbering (“indexing”) generally begins with zero in programming.

.select("tr")

Now, within that table, find anything within the tag <tr>

.toString()

And convert the results into a string of text.

The results of that expression in the Preview window should look something like this:

<tr> <th></th> <th>Abbotswell School</th> <th>Aberdeen City</th> <th>Scotland</th> </tr> <tr> <th>Percentage of pupils</th> <td>25.5%</td> <td>16.3%</td> <td>22.6%</td> </tr>

This is still HTML, but a much smaller and more manageable chunk. You could, if you chose, now export it as a spreadsheet file and use various techniques to get rid of the tags (Find and Replace, for example) and split the data into separate columns (the =SPLIT formula, for example).

Or you could further tweak your GREL code in Refine to drill further into your data, like so:

value.parseHtml().select("table.destinations")[0].select("td")[0].toString()

Which would give you this:

<td>25.5%</td>

Or you can add the .substring function to strip out the HTML like so (assuming that the data you want is always 5 characters long):

value.parseHtml().select("table.destinations")[0].select("td")[0].toString().substring(4,9)

When you’re happy, click OK and you should have a new column for that data. You can repeat this for every piece of data you want to extract into a new column.

Then click Export in the upper right corner and save as a CSV or Excel file.
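
If you later outgrow Refine and want the Scraperwiki-style code route mentioned at the start, the same pipeline can be sketched in a few lines of Python. This is only an illustrative sketch: the requests and BeautifulSoup libraries are assumptions, the sample IDs are the ones from the examples above, and the table class may differ on the live pages.

import csv
import requests
from bs4 import BeautifulSoup

BASE = "http://www.ltscotland.org.uk/scottishschoolsonline/schools/freemealentitlement.asp?iSchoolID="
school_ids = ["5237521", "5237629", "5237823"]   # substitute your full list from the spreadsheet

rows = []
for school_id in school_ids:
    html = requests.get(BASE + school_id).text   # the fetch step Refine does for you
    table = BeautifulSoup(html, "html.parser").select_one("table.destinations")
    if table is None:
        continue                                 # the page layout may differ
    rows.append([school_id] + [td.get_text(strip=True) for td in table.select("td")])

with open("free_meals.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)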

Source: http://onlinejournalismblog.com/2012/01/13/sftw-scraping-data-with-google-refine/

Wednesday 24 June 2015

Data Scraping - What Are Hand-Scraped Hardwood Floors and What Are the Benefits?

If you love the look of hardwood flooring with lots of character, then you may want to check out hand-scraped hardwood flooring. Hand-scraped wood provides a warm vintage look, providing the floor instant character. These types of scraped hardwoods are suitable for living rooms, dining rooms, hallways and bedrooms. But what exactly is hand-scraped hardwood flooring?

Well, it is literally what you think it is. Hand-scraped hardwood flooring is created by hand, using specialized woodworking tools to make each board unique and give it an overall "old worn" appearance.

At Innovation Builders we offer solid wood floors finished on site with an actual hand-scraping technique followed by stain and sealer. Solid wood floors are installed by an expert team of technicians who work each board with skilled craftsman-like attention to detail. Following the scraping procedure the floor is stained by hand with a customer selected stain color, and then protected with multiple coats of sealing and finishing polyurethane. This finishing process of staining, sealing and coating the wood floors contributes to providing the look and durability of an old reclaimed wood floor, but with today's tough, urethane finishes.

There are many, many benefits to hand-scraped wood flooring. Overall, these floors are extremely durable and hard wearing, providing years of trouble-free use. These wood floors remain looking newer for longer because the texture that the process provides hides the typical dents, dings and scratches that other floors can't hide so easily. That's great news for households with kids, dogs, and cats.

These types of wood flooring have another unique advantage as well. When you do scratch these floors during their lifetime, the scratches are easily repaired. As long as the scratch isn't too deep you can make them practically disappear without ever having to hire a professional. It's simple to hide the scratch by using a color-matched stain marker or repair kit that is readily available through local flooring distributors. These features make hand-scraped hardwood flooring a lot more durable and hassle-free to maintain than other types of wood flooring.

The expert processes utilized in the creation of these floors provides a custom look of worn wood with deep color and subtle highlights. When the light hits the wood at different times during the day, it provides an understated but powerful effect of depth and beauty. They instantly offer your rooms a rustic look full of character, allowing your home to become a warm and inviting environment. The rustic look of this wood provides a texture, style and rustic appeal that cannot be matched by any other type of flooring.

Hand-Scraped Hardwood Flooring is a floor that says welcome and adds a touch of elegance to any home. If you are looking to buy a new home and you haven't had the opportunity to see or feel hand scraped hardwoods, stop in any of the model homes at Innovation Builders in Keller, North Richland Hills or Grand Prairie, Texas and check it out!

Source: http://ezinearticles.com/?What-Are-Hand-Scraped-Hardwood-Floors-and-What-Are-the-Benefits?&id=6026646

Friday 19 June 2015

Web scraping in under 60 seconds: the magic of import.io

Import.io is a very powerful and easy-to-use tool for data extraction that has the aim of getting data from any website in a structured way. It is meant for non-programmers that need data (and for programmers who don’t want to overcomplicate their lives).

I almost forgot!! Apart from everything, it is also a free tool (o_O)

The purpose of this post is to teach you how to scrape a website and make a dataset and/or API in under 60 seconds. Are you ready?

It's very simple. You just have to go to http://magic.import.io, paste the URL of the site you want to scrape, and push the "GET DATA" button. Yes! It is that simple! No plugins, downloads, previous knowledge or registration are necessary. You can do this from any browser; it even works on tablets and smartphones.

For example: if we want to have a table with the information on all items related to Chewbacca on MercadoLibre (a Latin American version of eBay), we just need to go to that site and make a search – then copy and paste the link (http://listado.mercadolibre.com.mx/chewbacca) on Import.io, and push the “GET DATA” button.

You’ll notice that now you have all the information on a table, and all you need to do is remove the columns you don’t need. To do this, just place the mouse pointer on top of the column you want to delete, and an “X” will appear.

Good news for those of us who are a bit more technically-oriented! There is a button that says “GET API” and this one is good to, well, generate an API that will update the data on each request. For this you need to create an account (which is also free of cost).

As you saw, we can scrape any website in under 60 seconds, even if it includes tons of results pages. This truly is magic, no? For more complex things that require logins, entering subwebs, automated searches, et cetera, there is downloadable import.io software… But I'll explain that in a different post.

Source: http://schoolofdata.org/2014/12/09/web-scraping-in-under-60-seconds-the-magic-of-import-io/

Monday 8 June 2015

Scraping Services - Assuring Scraping Success with Proxy Data Scraping

Have you ever heard of "Data Scraping?" Data Scraping is the process of collecting useful data that has been placed in the public domain of the internet (private areas too if conditions are met) and storing it in databases or spreadsheets for later use in various applications. Data Scraping technology is not new and many a successful businessman has made his fortune by taking advantage of data scraping technology.

Sometimes website owners may not derive much pleasure from automated harvesting of their data. Webmasters have learned to disallow web scrapers access to their websites by using tools or methods that block certain IP addresses from retrieving website content. Data scrapers are left with the choice to either target a different website, or to move the harvesting script from computer to computer using a different IP address each time and extract as much data as possible until all of the scraper's computers are eventually blocked.

Thankfully there is a modern solution to this problem. Proxy Data Scraping technology solves the problem by using proxy IP addresses. Every time your data scraping program executes an extraction from a website, the website thinks it is coming from a different IP address. To the website owner, proxy data scraping simply looks like a short period of increased traffic from all around the world. They have very limited and tedious ways of blocking such a script but more importantly -- most of the time, they simply won't know they are being scraped.

You may now be asking yourself, "Where can I get Proxy Data Scraping Technology for my project?" The "do-it-yourself" solution is, rather unfortunately, not simple at all. Setting up a proxy data scraping network takes a lot of time and requires that you own a bunch of IP addresses and suitable servers to be used as proxies, not to mention the IT guru you need to get everything configured properly. You could consider renting proxy servers from select hosting providers, but that option tends to be quite pricey, though arguably better than the alternative: dangerous and unreliable (but free) public proxy servers.

There are literally thousands of free proxy servers located around the globe that are simple enough to use. The trick however is finding them. Many sites list hundreds of servers, but locating one that is working, open, and supports the type of protocols you need can be a lesson in persistence, trial, and error. However if you do succeed in discovering a pool of working public proxies, there are still inherent dangers of using them. First off, you don't know who the server belongs to or what activities are going on elsewhere on the server. Sending sensitive requests or data through a public proxy is a bad idea. It is fairly easy for a proxy server to capture any information you send through it or that it sends back to you. If you choose the public proxy method, make sure you never send any transaction through that might compromise you or anyone else in case disreputable people are made aware of the data.

A less risky scenario for proxy data scraping is to rent a rotating proxy connection that cycles through a large number of private IP addresses. Several companies offering this claim to delete all web traffic logs, which allows you to anonymously harvest the web with minimal threat of reprisal. Such companies offer large-scale anonymous proxy solutions, but often carry a fairly hefty setup fee to get you going.

The other advantage is that companies who own such networks can often help you design and implement a custom proxy data scraping program instead of trying to work with a generic scraping bot. After performing a simple Google search, I quickly found one company (www.ScrapeGoat.com) that provides anonymous proxy server access for data scraping purposes. Or, according to their website, if you want to make your life even easier, ScrapeGoat can extract the data for you and deliver it in a variety of different formats, often before you could even finish configuring your off-the-shelf data scraping program.

Whichever path you choose for your proxy data scraping needs, don't let a few simple tricks thwart you from accessing all the wonderful information stored on the world wide web!

Source: http://ezinearticles.com/?Assuring-Scraping-Success-with-Proxy-Data-Scraping&id=248993

Tuesday 2 June 2015

On-line directory tree webscraping

As you surf around the internet — particularly in the old days — you may have seen web-pages like this:

The former image is generated by Apache SVN server, and the latter is the plain directory view generated for UserDir on Apache.

In both cases you have a very primitive page that allows you to surf up and down the directory tree of the resource (either the SVN repository or a directory file system) and select links to resources that correspond to particular files.

Now, a file system can be thought of as a simple key-value store for these resources burdened by an awkward set of conventions for listing the keys where you keep being obstructed by the ‘/‘ character.

My objective is to provide a module that makes it easy to iterate through these directory trees and produce a flat table with the following helpful entries: url, abspath, fname, name and ext.

Although there is clearly redundant data between the fields url, abspath, fname, name, ext, having them in there makes it much easier to build a useful front end.

The function code (which I won’t copy in here) is at https://scraperwiki.com/scrapers/apache_directory_tree_extractor/. This contains the functions ParseSVNRevPage(url) and ParseSVNRevPageTree(url), both of which return dicts of the form:

{'url', 'rev', 'dirname', 'svnrepo',

 'contents':[{'url', 'abspath', 'fname', 'name', 'ext'}]}

I haven’t written the code for parsing the Apache Directory view yet, but for now we have something we can use.

I scraped the UK Cave Data Registry with this scraper which simply applies the ParseSVNRevPageTree() function to each of the links and glues the output into a flat array before saving it:

import scraperwiki

# href is the URL of one SVN revision page taken from the registry's list of links
lrdata = ParseSVNRevPageTree(href)

ldata = []
for cres in lrdata["contents"]:
    # copy the repository-level fields onto each file entry so the table is flat
    cres["svnrepo"], cres["rev"] = lrdata["svnrepo"], lrdata["rev"]
    ldata.append(cres)

# (svnrepo, rev, abspath) acts as the unique key for each saved row
scraperwiki.sqlite.save(["svnrepo", "rev", "abspath"], ldata)

Now that we have a large table of links, we can make the cave image file viewer based on the query:

select abspath, url, svnrepo from swdata where ext='.jpg' order by abspath limit 500

By clicking on a reference to a jpg resource on the left, you can preview what it looks like on the right.

If you want to know why the page is muddy, a video of the conditions in which the data was gathered is here.

Image files are usually the most immediately interesting part of any unknown file system dump. And they can be made more interesting by associating meta-data with them (given that there is no convention for including interesting information in the EXIF sections of their file formats). This meta-data might be floating around in other files dumped into the same repository – eg in the form of links to them from html pages which relate to picture captions.

But that is a future scraping project for another time.

Source: https://scraperwiki.wordpress.com/2012/09/14/on-line-directory-tree-webscraping/

Thursday 28 May 2015

Data Scraping Services - Things to take care while doing Web Scraping!!!

In the present day and age, the term web scraping has become very popular in data science. Basically, web scraping is extracting information from websites using pre-written programs and web scraping scripts. Many organizations have successfully used website scraping to build relevant and useful databases that they use on a daily basis to enhance their business interests. This is the age of Big Data, and web scraping is one of the trending techniques in data science.

Throughout my journey of learning web scraping and implementing many successful scraping projects, I have come across some great experiences we can learn from.  In this post, I’m going to discuss some of the approaches to take and approaches to avoid while executing web scraping.

Use Proxies: Anonymously scraping data from websites

One should not scrape a website with a single IP address, because when you repeatedly request web pages for scraping, there is a chance that the remote web server will block your IP address, preventing further requests. To overcome this, scrape websites with the help of proxy servers (anonymous scraping). This will minimize the risk of getting trapped and blacklisted by a website. Proxies hide your identity (network details) from remote web servers while you scrape data. You may also use a VPN instead of proxies to anonymously scrape websites.
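
As a rough illustration, assuming Python with the requests library (the proxy addresses below are placeholders, not working proxies), routing each request through a small pool of proxies might look like this:

import random
import requests

# Placeholder proxy addresses -- substitute your own working proxies or VPN endpoint.
PROXY_POOL = [
    "http://10.0.0.1:8080",
    "http://10.0.0.2:8080",
]

def fetch(url):
    proxy = random.choice(PROXY_POOL)   # pick a different proxy for each request
    return requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=30)

response = fetch("http://example.com/page-to-scrape")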

Take maximum data and store it.

Do not follow a "process the web page as it comes from the remote server" approach. Instead, take all the information and store it to disk first. This is useful when your scraping algorithm breaks in the middle: you don't have to start scraping again. Never download the same content more than once, as you are just wasting bandwidth. Try to download all content to disk in one go and then do the processing.
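
A minimal sketch of this idea in Python, assuming the requests library (the paths and names are illustrative): download each page once, keep the raw HTML on disk, and run the parsing step separately against the saved files.

import os
import requests

RAW_DIR = "raw_pages"                        # local cache of raw HTML, parsed later
os.makedirs(RAW_DIR, exist_ok=True)

def download_once(url, filename):
    path = os.path.join(RAW_DIR, filename)
    if os.path.exists(path):                 # never download the same content twice
        return path
    with open(path, "w", encoding="utf-8") as f:
        f.write(requests.get(url).text)
    return path

# Parsing can then be re-run any number of times against the files in raw_pages/.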

Follow strict rules in parsing:

Check various rules while parsing the information from the website. For example, if you expect a value to be a date, check that it really is a date. This can greatly improve the quality of the information. When you get unexpected data, the algorithm needs to be changed accordingly.
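
For example, a field you expect to be a date can be validated before it is stored; a small Python sketch (the expected format is an assumption):

from datetime import datetime

def parse_expected_date(raw_value):
    # Fail loudly instead of silently storing bad data if the site changes its format.
    try:
        return datetime.strptime(raw_value.strip(), "%d/%m/%Y")
    except ValueError:
        raise ValueError("Unexpected date value: %r -- parsing rules may need updating" % raw_value)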

Respect Robots.txt

Robots.txt specifies the set of rules that should be followed by web crawlers and robots. I strongly advise you to consider and adjust your crawler to fully respect robots.txt. It contains instructions on the exact pages you are allowed to crawl, the user-agent, and the required intervals between page requests. Following these instructions minimizes the chance of getting blacklisted and banned by the website owner.
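
Python's standard library ships with a robots.txt parser, so a crawler can check each URL before requesting it. A small sketch (the domain and user-agent are placeholders):

from urllib import robotparser

rp = robotparser.RobotFileParser("http://example.com/robots.txt")   # placeholder domain
rp.read()

url = "http://example.com/some/page"
if rp.can_fetch("MyScraperBot", url):
    print("allowed to crawl", url)           # go ahead and request this page
else:
    print("disallowed by robots.txt", url)   # skip it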

Use XPath Smartly

XPath is a nice option for selecting elements of the HTML document more flexibly than CSS selectors. Be careful, though: the HTML structure can change from page to page, so an XPath expression that works on one page may fail to extract data on another because of differences in the HTML structure.
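
A short illustration of the difference, assuming Python with lxml (the markup and expressions are made up): prefer XPath anchored on stable attributes over long positional paths, which break as soon as the structure shifts.

from lxml import html

doc = html.fromstring("<div><span class='price'>19.99</span></div>")

# Brittle: depends on the exact nesting, breaks if a wrapper element is added.
brittle = doc.xpath("/div/span/text()")

# More robust: anchored on an attribute that is less likely to change.
robust = doc.xpath("//span[@class='price']/text()")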

Obey the Website's Terms and Conditions:

Some websites make it absolutely apparent in their terms and conditions that they are particularly against web scraping activities on their content. This can make you vulnerable to possible ethical and legal implications.

Test sample scrape and verify the data with actual scrape

Once your web scraping project is set up, you need to test it for some time. Check the extracted data. If something is not right, find out the cause, make changes accordingly, and you will finally arrive at a solid web scraping project.

Source: http://webdata-scraping.com/things-take-care-web-scraping/

Tuesday 26 May 2015

Data Mining Services

Data Mining Services, through its data mining services, can mine required data for you from any of the available sources. Over the years, we have successfully catered to a wide variety of outsourced data mining requirements, which demonstrates our competency in dealing with your data mining requirements.

Based on your requirements, we can mine data from your preferred data sources, or we will use our own reliable sources to mine the data required by you. We have been using automated as well as manual data mining strategies to deliver superior data mining services.

Types of data mining services delivered by us

With the extensive variety of data mining services provided by us, you will definitely be able to find the service package that caters to your requirements. Listed below are just some of the data mining services offered by us:

•    Web data mining
•    Data extraction
•    Data capture
•    Data gathering
•    Collection of required data
•    Validation of data

Outsource your data mining requirements to us, and we are sure that the data mining India unit of Hi-Tech BPO Services will be able to formulate the most appropriate and cost-effective solutions to cover your entire requirements.

Highlights of our data mining services:

•    Most affordable rates
•    Dedicated data mining India unit
•    Latest data mining technologies used to mine all required data
•    Data will be mined, gathered, processed and validated as per your requirements
•    Mined data can be directly included into your database

Competitive advantage of using our data mining services

To mine accurate and relevant data, some level of internet knowledge is essential, and it would also consume a lot of your valuable time. With our data mining services, we will take care of all your data mining tasks, while you look after your business and its core functions.

The affordably priced data mining services delivered by the data mining India unit will also help you to save considerable amount of your money, which you can put into more productive purposes.

Source: http://www.hitechbposervices.com/data-mining.php

Monday 25 May 2015

Data Scraping - One application or multiple?

I have 30+ sources of data I scrape daily in various formats (XML, HTML, CSV). Over the last three years I've built 20 or so C# console applications that go out, download the data and re-format it into a database. But I'm curious what other people are doing for this type of task. Are people building one tool that has a lot of variables and inputs, or are people designing 20+ programs to scrape and parse this data? Everything is hard-coded into each console app and run through the Windows Task Scheduler.

Added a couple additional thoughts/details:

    Of the 30 sources, they all have unique properties, all are uploaded into individual MySQL tables and all have varying frequencies. For example, one data source is hit once a minute, another at 5-minute intervals. The majority are once an hour or once a day.

At current I download the formats (xml, csv, html), parse them into a formatted csv and put them into staging folders. Within that folder, I run an application that reads a config file specific to the folder. When a new csv is added to the folder, the application then uploads the data into the specific MySQL tables designated in the config file.

I'm wondering if it is worth re-building all this into a larger, more complex program that is more capable of dynamically adding content and scrapes and adjusting to format changes.

Looking for outside thoughts.

5 Answers

What you are working on is basically ETL. So at a high level you need an export component (get stuff), a transform component (map to a known format) and a load component (take the known format and put the stuff somewhere). If you are comfortable being tied to an RDBMS you could use something like SQL Server SSIS packages. What I would do is create a host application that manages common aspects of the overall process (errors, and pipeline processing), then make the specifics of the E, T, and L pluggable. A low-ceremony way to get this would be to host the PowerShell runtime and create each session with common context objects that the scripts use to communicate. You get a built-in pipe-and-filter model for scripts and easy, safe extensibility. This design has worked extremely well for my team in a similar situation.
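
A rough sketch of that host/plug-in split, shown here in Python purely as an illustration (the names and sources are made up; the real thing would live in whatever runtime you host):

def run_pipeline(sources, load):
    # Host application: owns error handling and pipeline processing.
    for name, extract, transform in sources:
        try:
            load(name, [transform(item) for item in extract()])
        except Exception as exc:
            print("source %s failed: %s" % (name, exc))   # centralised error handling

# Each source plugs in its own E and T; the L (loading to MySQL, etc.) stays shared.
sources = [
    ("feed_a", lambda: ["<xml>..."], lambda item: {"raw": item}),
    ("feed_b", lambda: ["a,b,c"],    lambda item: {"raw": item}),
]
run_pipeline(sources, lambda name, rows: print(name, rows))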

Resist the temptation to rewrite.

However, for new code, you could plan for what you know has already happened. Write a retrieval mechanism that you can reuse through configuration. Write a translation mechanism that you can reuse (maybe in a library that you can call with very little code). Write a saving mechanism that can be called or configured.

At this point, you've written #21(+). Now, the following ones can be handled with a tiny bit of code and configuration. Yay!

(You may want to implement this in a service that handles multiple conversions, but weigh the benefits of it versus the ability to separate errors in one module from the rest.)


It depends - if you need the scrapers to feed into a single application/database and have a uniform data format, it makes sense to have them all in a single program (possibly inheriting from a common base scraper).

If not and they are completely unrelated to each other, might as well keep them separate so changes in one have no effect on another.

Update, following edits to question:

Don't change things just for the sake of change. You have something that works, don't mess with it too much.

Since your data sources and data sinks are all separate from each other, combining them into one application will simply create a very complicated application that will be very difficult to change when needed.

Since the scrapers are separate, keep the separation as you have it now.

As sbrenton said, this mostly falls under ETL. You should check out Talend Open Studio. It specializes in handling data flows like I imagine yours are, as well as other things like duplicate removal and normalization of fields; it has tens/hundreds of drag-and-drop ETL components, and you can also write custom code, as Talend is a code generator as well – either Java or Perl are options. You can also use Talend to execute system commands. I use it for my ETL work, although not in production; in production we will use SSIS, mostly due to lots of other Microsoft products in house.

You may want to use some good scheduling library, like Quartz.NET.

In a few words, here's what you can expect:

    Your tasks are represented by classes and not processes

    You can set and forget tasks and scale across multiple servers

    You have an out-of-the-box system to actually take care of what is needed to be run when, what failed and needs to be re-run, etc. etc.

Source: http://programmers.stackexchange.com/questions/118077/data-scraping-one-application-or-multiple/118098#118098


Friday 22 May 2015

How to prevent getting blacklisted while scraping

Crawlers can retrieve data much quicker and in greater depth than human searchers, so bad scraping practices can have some impact on the performance of the site.

Needless to say, if a single crawler is performing multiple requests per second and/or downloading large files, an underpowered server would have a hard time keeping up with requests from multiple crawlers.

Since spiders don’t bring direct organic traffic and seemingly affect the performance of the site, most site admins hate spiders and do their best to prevent them.

Let's go through how websites detect and block spiders, and also look at the techniques to overcome those barriers.

Most websites don’t have anti scraping mechanisms since it would affect the user experience, but some sites do not believe in open data access.

Before going through this article always keep in mind that

    A GOOD SPIDER MUST OBEY A WEBSITE’S CRAWLING POLICIES.

HOW DOES DETECTING ‘SPIDER ACTIVITY’ WORK?

A web server can use different mechanisms to detect a spider from a normal user. Here are some methods used by a site to detect a spider:

•    Unusual traffic/high download rate especially from a single client/or IP address within a short time span raises a bot alert.

•    Repetitive tasks done on website based on an assumption that a human user won’t perform the same repetitive tasks all the time.

•    The site has honeypot traps inside its pages; these honeypots are usually links which aren't visible to a normal user but only to a spider. When a scraper/spider tries to access the link, the alarms are tripped.

Spend some time investigating the anti-scraping mechanisms used by a site and build the spider accordingly; it will provide a better outcome in the long run and increase the longevity and robustness of your work.

EASIEST WAY TO FIND IF A SITE HATES BOTS

Check the robots.txt file: if it contains lines like these, it means the site doesn't like bots. However, since most sites want to be on Google (arguably the largest scraper of websites globally ;-)) they do allow access to bots and spiders.

User-agent: *
Disallow: /

These lines only stop well-behaved bots – the ones which respect robots.txt.

Another sign is the irritating presence of CAPTCHAs on pages other than the authentication page.

WHAT HAPPENS WHEN YOU GET BANNED

There are two ways to ban a web spider: either by banning all accesses from a particular IP, or by banning all accesses that use a specific ID to access the server (most browsers and web spiders identify themselves whenever they request a page via user agents; the Chrome browser, for example, uses Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.149 Safari/537.36).

The banning can be temporary or permanent. Temporary blocks can last minutes or hours.

HOW DO WE KNOW A SITE HAS BLOCKED US?

If any of the following symptoms appear on the site that you are crawling, it is a sign of being blocked or banned.

•    Showing CAPTCHA pages
•    Unusual content delivery delay
•    Frequent responses with 404, 301 or 500 errors

Frequent appearance of the following status codes is also an indication of blocking:

•    401 Unauthorized
•    403 Forbidden
•    404 Not Found
•    408 Request Timeout
•    429 Too Many Requests

WEB CRAWLING BEST PRACTICES

These are the best practices we can follow to overcome the detection.

1. MAKE CRAWLING SLOWER, DO NOT DDoS THE SERVER, TREAT THEM NICELY

Use auto-throttling mechanisms, which automatically adjust crawling speed based on the load on both the spider and the website you are crawling, and tune the spider to the optimum crawling speed. The faster you crawl, the worse it is for everyone.

Put some random sleeps in between requests, and add some delays after crawling a number of pages. Choose the lowest number of concurrent requests possible. These techniques make the spider look more like a human being.
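
As a rough Python sketch (the delay values are arbitrary examples, not recommendations for any particular site):

import random
import time
import requests

session = requests.Session()

def polite_get(url):
    time.sleep(random.uniform(2, 6))   # random pause so requests don't arrive in a steady burst
    return session.get(url)

# Example: rest a little longer after every 100 pages.
# for i, url in enumerate(urls):
#     polite_get(url)
#     if i and i % 100 == 0:
#         time.sleep(60)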

2. DISGUISE YOUR REQUESTS BY ROTATING IP/PROXY

A server can easily detect a bot by checking the requests coming from a single IP address, so we use different IPs for making requests to a server and the detection rate becomes lower. Make a pool of IPs that you can use, and use a random one for each request.

There are several methods that can be used to change the IP. Services like VPNs, shared proxies and TOR can help, and some third parties also provide services for IP rotation.

3. USER-AGENT SPOOFING

Since every request made from a client contains a user-agent header, using the same user agent multiple times leads to the detection of a bot. User-agent spoofing is the best solution for this: spoof the user agent by making a list of user agents and picking a random one for each request.

Websites do not want to block genuine users so you should try to look like one. Set your user-agent to a common web browser instead of using the library default (such as wget/version or urllib/version). You could even pretend to be the Google Bot: Googlebot/2.1; (http://www.google.com/bot.html)
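
A minimal Python sketch of that idea, assuming the requests library (the agent strings below are examples only; build a longer, up-to-date list from the resources linked below):

import random
import requests

USER_AGENTS = [
    # Example strings only -- keep a longer list of common browser agents.
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.149 Safari/537.36",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.149 Safari/537.36",
]

def get_with_random_agent(url):
    headers = {"User-Agent": random.choice(USER_AGENTS)}   # a different agent per request
    return requests.get(url, headers=headers)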

You can check your user-agent string here:

http://www.whatsmyuseragent.com/

A good user-agent string list can be found here:

http://www.useragentstring.com/pages/useragentstring.php

4. BE AWARE OF HONEYPOTS

Some site designers put honeypot traps inside websites to detect web spiders. These may be links that a normal user can't see but a spider can.

When following links, always take care that the link has proper visibility and no nofollow tag. Some honeypot links placed to detect spiders will have the CSS style display:none or will be colour-disguised to blend in with the page's background colour.
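
One rough way to screen out the most obvious honeypots before following links is sketched below (Python with lxml is an assumption; real pages can hide links in many other ways, so treat this as a starting point only):

from lxml import html

def visible_links(page_source):
    doc = html.fromstring(page_source)
    links = []
    for a in doc.xpath("//a[@href]"):
        style = (a.get("style") or "").replace(" ", "").lower()
        rel = a.get("rel") or ""
        if "display:none" in style or "visibility:hidden" in style:
            continue                      # skip links hidden with inline CSS
        if "nofollow" in rel:
            continue                      # respect nofollow, as suggested above
        links.append(a.get("href"))
    return links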

5. DO NOT ALWAYS FOLLOW THE SAME CRAWLING PATTERN

Only robots follow the same crawling pattern; sites that have intelligent anti-crawling mechanisms can easily detect spiders by finding patterns in their actions. Humans won't perform repetitive tasks over and over. Incorporate some random clicks on the page, mouse movements and random actions that will make a spider look like a human client.

6. ALWAYS RESPECT THE robots.txt

All web spiders are supposed to follow rules that you place in a robots.txt file in a website, such as how frequently they are allowed to request pages, and from what directories they are allowed to crawl through. They should also be supplying a consistent valid User-Agent string that identifies the requests as a bot request.

Source: http://learn.scrapehero.com/how-to-prevent-getting-blacklisted-while-scraping/

Tuesday 19 May 2015

The Features of the "Holographic Meridian Scraping Therapy"

1. Systematic nature: Brief introduction to the knowledge of viscera, meridians and points in traditional Chinese medicine, theory of holographic diagnosis and treatment; preliminary discussion of the treatment and health care mechanism of scraping therapy; systematic introduction to the concrete methods of the holographic meridian scraping therapy; enumerating a host of therapeutic methods of scraping for disorders in both Chinese and Western medicine to embody a combination of disease differentiation and syndrome differentiation; and summarizing the health care scraping methods. It is a practical handbook of gua sha.

2. Scientific: Applying the theories of Chinese and Western medicine to explain the health care and treatment mechanism and clinical applications of scraping therapy; introducing in detail the practical manipulations, items for attention, and indications and contraindications of the scraping therapy. Here are introduced representative diseases in different clinical departments, for which scraping therapy has a better curative effect and the therapeutic methods of scraping for these diseases. Stress is placed on disease differentiation in Western medicine and syndrome differentiation in Chinese medicine, which should be combined in practical application.

Although there are more than 140,000 kinds of disease known to modern medicine, all diseases are related to dysfunction of the 14 meridians and internal organs, according to traditional Chinese medicine. The object of scraping therapy is to correct the disharmony in the meridians and internal organs to recover the normal bodily functions. Thus, the scraping of a set of meridian points can be used to treat many diseases. In the section on clinical application only about 100 kinds of common diseases are discussed, although the actual number is much more than that. For easy reference the "Index of Diseases and Symptoms" is appended at the back of the book.

3. Practical: Using simple language and plenty of pictures and diagrams to guarantee that readers can easily learn, memorize and apply the principles of scraping therapy. As long as they master the methods explained in Chapter Three, readers without any medical knowledge can apply scraping therapy to themselves or others, with reference to the pictures in Chapters Four and Five. Besides scraping therapy, herbal treatment for each disease or syndrome is explained and may be used in combination with the scraping techniques.

Referring to the Holographic Meridian Hand Diagnosis and pictures at the back of the book will enhance accuracy of diagnosis and increase the effectiveness of scraping therapy.

Since the first publication and distribution of the Chinese edition of the book in July 1995, it has been welcomed by both medical specialists and lay people. In March 1996 this book was republished and adopted as a textbook by the School for Advanced Studies of Traditional Chinese Medicine affiliated to the Institute of Acupuncture and Moxibustion of the China Academy of Traditional Chinese Medicine.

In order to bring this health care method to more and more people and to make traditional Chinese medicine better appreciated, they have modified and replenished this book in the spirit of constant improvement. They hope that they may make a contribution to the health care of mankind with this natural therapy, which has no side-effects and causes no pollution.

They hope that the Holographic Meridian Scraping Therapy can help the health and happiness of more and more families in the world.

Source: http://ezinearticles.com/?The-Features-of-the-Holographic-Meridian-Scraping-Therapy&id=5005031

Wednesday 6 May 2015

Web Scraping Services Are Important Tools For Knowledge

Data extraction and web scraping techniques are important tools for finding relevant data and information for personal or business use. Many companies still employ people to copy and paste data from web pages. This process is reliable but very expensive, as it wastes time and effort: the value of the data collected is small compared with the resources and time required to collect it.

At present, several data mining companies have developed effective web scraping techniques that can trawl thousands of pages of information for their clients. The information is delivered in whatever format is required – a CSV file, database, XML file, or any other source – and can then be analysed for correlations and patterns in the data, so that policies can be designed to assist decision making. The information can also be stored for future reference.

The following are some common examples of data extraction process:

Extracting from a government portal the names of citizens relevant to a given survey

Scraping competitor websites for pricing and product data

Downloading videos and photos from stock websites for use in web design

Automatic Data Collection

Automated data collection gathers data on a regular basis. Automated data collection techniques are very important because they help a company discover customer trends and market trends. By determining market trends, it is possible to understand customer behavior and predict how the data is likely to change.

The following are some examples of automated data collection:

Monitoring stock prices at hourly intervals

Collecting daily mortgage rates from various financial institutions

Checking weather data on a regular basis

By using web scraping services, you can extract all the data related to your business. The data can then be downloaded to a spreadsheet or database, analysed and compared. Storing data in a database, or in another required format, makes it easier to interpret and understand the correlations and to identify hidden patterns.

With data extraction services, it is possible to consistently collect pricing, email addresses, databases, profile data, and information about competitors. Different techniques and processes designed to collect and analyze data have developed over time. Web scraping for business processes is one that has recently taken the market by storm. It is a process that provides large amounts of data from various sources, such as websites and databases.

Some of the most common methods used to scrape the web include web crawling, text grepping, DOM parsing and regular expression matching. After crawling, meaning can be extracted from HTML pages through parsers or annotations. There are many different ways of scraping data, but all of them work toward the same goal. The main purpose of using a web scraping service is to retrieve and compile data from websites into databases, and in the business world that keeps a business process relevant.

The central question is about the relevance of web scraping: is the process relevant to the business world? The answer is yes, a fact demonstrated by its use at large companies around the world.

Source: http://www.selfgrowth.com/articles/web-scraping-services-are-important-tools-for-knowledge

Tuesday 28 April 2015

A Guide to Web Scraping Tools

Web scrapers are tools designed to extract or gather data from a website via a crawling engine, usually made in Java, Python, Ruby and other programming languages. Web scrapers are also called web data extractors, data harvesters or crawlers, and most of them are web-based or can be installed on local desktops.

Their main purpose is to enable webmasters, bloggers, journalists and virtual assistants to harvest data from a certain website – whether text, numbers, contact details or images – in a structured way, which cannot be done easily through manual copy and paste. Typically, they transform the unstructured data on the web from HTML format into structured data stored in a local database or spreadsheet, or automate human web browsing.

Web Scraper Usage

Web scrapers are also used by SEO and online marketing analysts to pull out data privately from competitors' websites, such as highly targeted keywords, valuable links, emails and traffic sources, much as is done by SEOClerk, Google and many other web crawling sites.

Common uses include:

•    Price comparison
•    Weather data monitoring
•    Website change detection
•    Research
•    Web mash up
•    Info graphics
•    Web data integration
•    Web Indexing & rank checking
•    Analyze websites quality links

List of Popular Web Scrapers

There are hundreds of web scrapers available today for both commercial and personal use. If you've never done any web scraping before, there are basic web scraping tools like YahooPipes, Google Web Scraper and the Outwit Firefox extension that are good to start with, but if you need something more flexible with extra functionality then check out the following:

HarvestMan [ Free Open Source]

HarvestMan is a web crawler application written in the Python programming language. HarvestMan can be used to download files from websites according to a number of user-specified rules. The latest version of HarvestMan supports more than 60 customization options. HarvestMan is a console (command-line) application, and it is the only open source, multithreaded web-crawler program written in Python. HarvestMan is released under the GNU General Public License. Like Scrapy, HarvestMan is truly flexible; however, your first installation will not be easy.

Scraperwiki [Commercial]

Using minimal programming you will be able to extract anything. Of course, you can also request a private scraper if there's an exclusive in there you want to protect. In other words, it's a marketplace for data scraping.

Scraperwiki is a site that encourages programmers, journalists and anyone else to take online information and turn it into legitimate datasets. It’s a great resource for learning how to do your own “real” scrapes using Ruby, Python or PHP. But it’s also a good way to cheat the system a little bit. You can search the existing scrapes to see if your target website has already been done. But there’s another cool feature where you can request new scrapers be built.  All in all, a fantastic tool for learning more about scraping and getting the desired results while sharpening your own skills.

Best use: Request help with a scrape, or find a similar scrape to adapt for your purposes.

FiveFilters.org [Commercial]   

This is an online web scraper available for commercial use. It provides easy content extraction using the Full-Text RSS tool, which can identify and extract web content (news articles, blog posts, Wikipedia entries, and more) and return it in an easy-to-parse format. Advantages: speedy article extraction, multi-page support, autodetection, and you can deploy it on a cloud server with no database required.

Kimono

Produced by Kimono Labs, this tool lets you convert data into APIs for automated export. Benjamin Spiegel did a great Youmoz post on how to build a custom ranking tool with Kimono – well worth checking out!

Mozenda [Commercial]

This is a unique tool for web data extraction or web scraping, designed as the easiest and fastest way of getting data from the web for everyone. It has a point & click interface, and with the power of the cloud you can scrape, store, and manage your data all with Mozenda's incredible back-end hardware. More advanced users can automate their data extraction without leaving a trace using Mozenda's anonymous proxy feature, which can rotate tons of IPs.

Need that data on a schedule? Every day? Each hour? Mozenda takes the hassle out of automating and publishing extracted data. Tell Mozenda what data you want once, and then get it however frequently you need it. Plus it allows advanced programming using the REST API, so the user can connect directly to their Mozenda account.

Mozenda's data mining software is packed full of useful applications, especially for sales people. You can do things such as lead generation, forecasting, acquiring information for establishing budgets, and competitor pricing analysis. This software is a great companion for marketing and sales plan creation.

Using the Refine Capture text tool, Mozenda is smart enough to filter the text you want so that it stays clean, or to grab specific text and split it into pieces.

80Legs [Commercial]

The first time I heard about 80legs my mind really got confused about what this software actually does. 80legs, like Mozenda, is a web-based data extraction tool with customizable features:

•    Select which websites to crawl by entering URLs or uploading a seed list
•    Specify what data to extract by using a pre-built extractor or creating your own
•    Run a directed or general web crawler
•    Select how many web pages you want to crawl
•    Choose specific file types to analyze

80legs offers customized web crawling that lets you get very specific about your crawling parameters, which tell 80legs what web pages you want to crawl and what data to collect from those pages, as well as general web crawling which can collect data like web page content, outgoing links and other data. Large web crawls take advantage of 80legs' ability to run massively parallel crawls.

It also crawls data feeds and offers web extraction design services. (No installation needed.)

ScrapeBox [Commercial]

ScrapeBox is one of the most popular web scraping tools among SEO experts, online marketers and even spammers. With its very user-friendly interface you can easily harvest data from a website:

•    Grab Emails
•    Check page rank
•    Check high-value backlinks
•    Export URLs
•    Check index status
•    Verify working proxies
•    Powerful RSS Submission

Using thousands of rotating proxies you will be able to sneak a look at your competitors' site keywords, do research on .gov sites, harvest data, and comment without getting blocked.

The latest updates allow the users to spin comments and anchor text to avoid getting detected by search engines.

You can also check out my guide to using Scrapebox for finding guest posting opportunities:

Scrape.it [Commercial]

Using a simple point & click Chrome extension tool, you can extract data from websites that render in JavaScript. You can automate form filling, extract data from popups, navigate and crawl links across multiple pages, and extract images from even the most complex websites with very little learning curve. You can also schedule jobs to run at regular intervals.

When a website changes layout or your web scraper stops working, scrape.it will fix it automatically so that you can continue to receive data uninterrupted and without the need to recreate or edit it yourself.

They work with enterprises, using the tool they built, to deliver fully managed solutions for competitive pricing analysis, business intelligence, market research, lead generation, process automation and compliance & risk management requirements.

Features:

    Very easy web data extraction with a Windows Explorer-like interface

    Allows you to extract text, images and files from modern Web 2.0 and HTML5 websites which use JavaScript & AJAX.

    The user can select which features they're going to pay for

    Lifetime upgrades and support at no extra charge on a premium license

Scrapy [Free Open Source]

Of course the list would not be complete without Scrapy. It is a fast, high-level screen scraping and web crawling framework, used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.

Features:

•         Designed with simplicity – just write the rules to extract the data from web pages and let Scrapy crawl the entire website. It can crawl 500 retailers' sites daily.

•         Ability to attach new code for extensibility without having to touch the framework core

•         Portable, open-source, 100% Python- Scrapy is completely written in Python and runs on Linux, Windows, Mac and BSD

•         Scrapy comes with lots of functionality built in.

•         Scrapy is extensively documented and has a comprehensive test suite with very good code coverage

•         Good community and commercial support

 Cons: The installation process is hard to get right, especially for beginners
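
For a flavour of what Scrapy code looks like, here is a minimal spider sketch (the site, selectors and field names are placeholders rather than a real scrape). A standalone file like this can be run with the scrapy runspider command, exporting items to JSON or CSV.

import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"
    start_urls = ["http://example.com/products"]    # placeholder site

    def parse(self, response):
        for product in response.css("div.product"): # selectors are assumptions
            yield {
                "title": product.css("h2::text").get(),
                "price": product.css("span.price::text").get(),
            }
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)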

Needlebase [Commercial]

Many organizations, from private companies to government agencies, store their info in a searchable database that requires you to navigate a list page of results and a detail page with more information about each result. Grabbing all this information could result in thousands of clicks, but as long as it fits the same formula, Needlebase can do it for you. Point and click on example data from one page once to show Needlebase how your site is structured, and it will use that pattern to extract the information you're looking for into a dataset. You can query the data through Needle's site, or you can output it as a CSV or other file format of your choice. Needlebase can also rerun your scraper every day to continuously update your dataset.

OutwitHub [Free]

This Firefox extension is one of the more robust free products that exist. Write your own formula to help it find the information you're looking for, or just tell it to download all the PDFs listed on a given page. It will suggest certain pieces of information it can extract easily, but it's flexible enough for you to be very specific in directing it. The documentation for Outwit is especially well written; they even have a number of tutorials for what you might be looking to do. So if you can't easily figure out how to accomplish what you want, investing a little time to push it further can go a long way.

Best use: more text

irobotsoft [Free]

This is a free program that is essentially a GUI for web scraping. There's a pretty steep learning curve to figure out how to work it, and the documentation appears to reference an old version of the software. It's the latest in a long tradition of tools that let a user click through the logic of web scraping. Generally, these are a good way to wrap your head around the moving parts of a scrape, but the products have drawbacks of their own that make them little easier than doing the same thing with scripts.

Cons: The documentation seems outdated

Best use: Slightly complex scrapes involving multiple layers.

iMacros [Free]

Following the same ethos as Microsoft macros, iMacros automates repetitive tasks. Whether you choose the website, Firefox extension, or Internet Explorer add-on flavor of this tool, it can automate navigating through the structure of a website to get to the piece of info you care about. Record your actions once, navigating to a specific page and entering a search term or username where appropriate. It is especially useful for navigating to a specific stock you care about, or campaign contribution data that's mired deep in an agency website and lacks a unique web address. Extract that key piece (or pieces) of info into a usable form. It can also help convert web tables into usable data, but OutwitHub is really more suited to that purpose. Helpful video and text tutorials enable you to get up to speed quickly.

Best use: Eliminate repetition in navigating to a particular datapoint in a website that you’re checking up on often by recording a repeatable action that pulls the datapoint out of the clutter it’s naturally surrounded by.

InfoExtractor [Commercial]

This is a neat little web service that generates all sorts of information given a list of urls. Currently, it only works for YouTube video pages, YouTube user profile pages, Wikipedia entries, Huffingtonpost posts, Blogcatalog blog posts and The Heritage Foundation blog (The Foundry). Given a url, the tool will return structured information including title, tags, view count, comments and so on.

Google Web Scraper [Free]

This browser-based web scraper works like Firefox’s Outwit Hub: it’s designed for plain text extraction from any online page, with export to spreadsheets via Google Docs. Google Web Scraper can be downloaded as an extension and installed in your Chrome browser within seconds. To use it: highlight the part of the webpage you’d like to scrape, right-click and choose “Scrape similar…”. Anything that’s similar to what you highlighted will be rendered in a table ready for export, compatible with Google Docs™. The latest version still has some bugs when exporting to spreadsheets.

Cons: It doesn’t work for images, and it sometimes struggles with large volumes of text, but it’s easy and fast to use.


Tutorials:

Scraping Website Images Manually using Google Inspect Elements

The main purpose of Google Inspect Elements is debugging, much like Firefox’s Firebug; however, with a little flexibility you can also use this tool to harvest images from a website. The goal is to get at specific images, such as web backgrounds, buttons, banners, header images and product images, which is very useful for web designers.

Now, this is a very easy task. First, you will need to download and install the Google Chrome browser on your computer. After the installation, do the following:

1. Open the desired webpage in Google Chrome

2. Highlight any part of the website and right click > choose Inspect Element

3. In the Inspect Elements panel, go to the Resources tab

4. Under the Resources tab, expand all the folders. You will eventually see script folders and IMAGES folders

5. In the IMAGES folders, use the arrow keys to find the images you need

6. Next, right click an image and choose Open Image in New Tab

7. Finally, right click the image > choose Save Image As… (save it to a local folder)

You’re done!
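If you need more than a handful of images, the same job can be scripted. Here is a minimal sketch in R using the rvest package covered further down this page; the URL is a placeholder, and it assumes the page’s img src attributes are absolute URLs:

library(rvest)

page <- html("http://www.example.com/")   # placeholder URL - swap in the page you want

# collect the src attribute of every <img> tag on the page
img_urls <- page %>% html_nodes("img") %>% html_attr("src")

# save each image to the working directory (assumes absolute URLs in src)
for (u in img_urls) {
  download.file(u, destfile = basename(u), mode = "wb")
}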

How to Extract Links from a Web Page with OutWit Hub

In this tutorial we are going to learn how to extract links from a webpage with OutWit Hub.

Sometimes it can be useful to extract all links from a given web page. OutWit Hub is the easiest way to achieve this goal.

1. Launch OutWit Hub

If you haven’t installed OutWit Hub yet, please refer to the Getting Started with OutWit Hub tutorial.

Begin by launching OutWit Hub from Firefox. Open Firefox then click on the OutWit Button in the toolbar.

If the icon is not visible, go to the menu bar and select Tools -> OutWit -> OutWit Hub.

OutWit Hub will open, displaying the web page currently loaded in Firefox.


2. Go to the Desired Web Page

In the address bar, type the URL of the Website.

Go to the Page view where you can see the Web page as it would appear in a traditional browser.

Now, select “Links” from the view list.

In the “Links” widget, OutWit Hub displays all the links from the current page.

If you want to export the results to Excel, just select all the links using Ctrl/Cmd + A, copy them with Ctrl/Cmd + C and paste them into Excel (Ctrl/Cmd + V).
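If you prefer to script it, here is a minimal sketch of the same link extraction in R with the rvest package described further down this page; the URL is a placeholder:

library(rvest)

page <- html("http://www.example.com/")   # placeholder URL

# pull the text and href of every link on the page
links <- data.frame(
  text = page %>% html_nodes("a") %>% html_text(),
  href = page %>% html_nodes("a") %>% html_attr("href"),
  stringsAsFactors = FALSE
)

write.csv(links, "links.csv", row.names = FALSE)   # open links.csv in Excel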

Source: http://www.garethjames.net/a-guide-to-web-scrapping-tools/

Wednesday 22 April 2015

Hand Scraped Versus Machine Scraped Floors - The Distinction

In society today, hardwood flooring has become the new must-have. The days of carpet are gone, and if you have looked into bringing your home up to date with today’s styling, you will have noticed by now that there are many different options. At times this can become overwhelming, especially if, like most people, you are not a hardwood specialist. That is why this article is here to help you understand the many different options available to you.

The flooring type covered in this article is hand scraped flooring. This is a custom-look flooring that is in very high demand in the flooring marketplace, which is understandable because it is probably the most unique flooring there is. You can choose from many different wood species such as oak, maple, hickory, and most exotic species. There is also computerized hand scraping, where the manufacturer scrapes one piece of wood and places it into a computer that will cut thousands of pieces, across different wood types, with that one design. This process is also known as machine scraping. Hardwood floors employing this technology usually cost less, but most of the pieces look the same because the hand scraping is done by a machine.

Then you have actual hand scraped flooring, which is done entirely by hand and takes more time and effort than machine scraping. This flooring is made to order: each individual piece is scraped and notched in different ways, so every piece is unique. If you decide to purchase actual hand scraped flooring it will cost you more than the mass-produced, computerized version, but it will definitely be the more unique option. If you are the type of person who wants a one-of-a-kind floor, then an actual hand scraped floor is the way to go.

So, in conclusion, hand scraped flooring is a great option for a lot of people. It comes in several different wood types and several different colors, and you can find flooring options for every budget and every style. Whether your custom floor is cut by computer or done by hand may or may not be important to you; most consumers cannot tell the difference between actual hand scraped flooring and machine scraped flooring when looking at a small sample. So when shopping at your local retailer, ask the tough questions and find out whether the manufacturer uses machine or authentic hand scraping on its products.

To view your many options in hand scraped flooring, please check out our website, which covers all hardwood flooring options.

Source: http://ezinearticles.com/?Hand-Scraped-Versus-Machine-Scraped-Floors---The-Distinction&id=4151157

Tuesday 7 April 2015

rvest: easy web scraping with R

rvest is a new package that makes it easy to scrape (or harvest) data from html web pages, inspired by libraries like beautiful soup. It is designed to work with magrittr so that you can express complex operations as elegant pipelines composed of simple, easily understood pieces. Install it with:

install.packages("rvest")

rvest in action

To see rvest in action, imagine we’d like to scrape some information about The Lego Movie from IMDB. We start by downloading and parsing the file with html():

library(rvest)

lego_movie <- html("http://www.imdb.com/title/tt1490017/")

To extract the rating, we start with selectorgadget to figure out which css selector matches the data we want: strong span. (If you haven’t heard of selectorgadget, make sure to read vignette("selectorgadget") – it’s the easiest way to determine which selector extracts the data that you’re interested in.) We use html_node() to find the first node that matches that selector, extract its contents with html_text(), and convert it to numeric with as.numeric():

lego_movie %>%

  html_node("strong span") %>%

  html_text() %>%

  as.numeric()

#> [1] 7.9

We use a similar process to extract the cast, using html_nodes() to find all nodes that match the selector:

lego_movie %>%

  html_nodes("#titleCast .itemprop span") %>%

  html_text()

#>  [1] "Will Arnett"     "Elizabeth Banks" "Craig Berry"   

#>  [4] "Alison Brie"     "David Burrows"   "Anthony Daniels"

#>  [7] "Charlie Day"     "Amanda Farinos"  "Keith Ferguson"

#> [10] "Will Ferrell"    "Will Forte"      "Dave Franco"   

#> [13] "Morgan Freeman"  "Todd Hansen"     "Jonah Hill"

The titles and authors of recent message board postings are stored in the third table on the page. We can use html_node() and [[ to find it, then coerce it to a data frame with html_table():

lego_movie %>%

  html_nodes("table") %>%

  .[[3]] %>%

  html_table()

#>                                              X 1            NA

#> 1 this movie is very very deep and philosophical   mrdoctor524

#> 2 This got an 8.0 and Wizard of Oz got an 8.1...  marr-justinm

#> 3                         Discouraging Building?       Laestig

#> 4                              LEGO - the plural      neil-476

#> 5                                 Academy Awards   browncoatjw

#> 6                    what was the funniest part? actionjacksin

Other important functions

•    If you prefer, you can use xpath selectors instead of css: html_nodes(doc, xpath = "//table//td") (a short example follows this list).

•    Extract the tag names with html_tag(), text with html_text(), a single attribute with html_attr() or all attributes with html_attrs().

•    Detect and repair text encoding problems with guess_encoding() and repair_encoding().

•    Navigate around a website as if you’re in a browser with html_session(), jump_to(), follow_link(), back(), and forward(). Extract, modify and submit forms with html_form(), set_values() and submit_form(). (This is still a work in progress, so I’d love your feedback.)
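To illustrate the xpath option above: the rating we scraped earlier with the css selector strong span can also be pulled with an xpath expression. This is a minimal sketch, assuming the two selectors match the same node on this page, so it should return the same 7.9:

lego_movie %>%
  html_node(xpath = "//strong//span") %>%
  html_text() %>%
  as.numeric()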

To see these functions in action, check out package demos with demo(package = "rvest").

Source: http://blog.rstudio.org/2014/11/24/rvest-easy-web-scraping-with-r/

Friday 27 March 2015

Make Your Business More Intelligent with Web Data Extraction services

Data extraction is one of the most widely practiced techniques for obtaining the knowledge relevant to your existing business or to personal use. Many times, we find that people copy and paste data manually from web pages, or download a complete website, which is a waste of time and energy.

Now, with the technique of data extraction, you can crawl through hundreds or even thousands of web pages to extract specific data and, at the very same time, save this information in one of the following ways:

  •     CSV file
  •     XML file, or any other custom format for future use.

Below are some instances of the data extraction process (a short sketch in R follows this list):
  •     Crawling a government portal to extract the names of voters for a survey
  •     Searching competitor websites for product pricing and information on features
  •     Using web scraping to download images from a stock photography site for website design
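As a rough illustration of that workflow, here is a minimal sketch in R using the rvest package covered earlier on this page; the URLs and the css selectors (h1, .price) are hypothetical and would need to match the real pages being crawled:

library(rvest)

# hypothetical list of detail pages that share one structure
urls <- c("http://www.example.com/item/1",
          "http://www.example.com/item/2")

scrape_one <- function(u) {
  page <- html(u)
  data.frame(url   = u,
             name  = page %>% html_node("h1") %>% html_text(),      # assumed selector
             price = page %>% html_node(".price") %>% html_text(),  # assumed selector
             stringsAsFactors = FALSE)
}

# one row per page, saved as the CSV output described above
results <- do.call(rbind, lapply(urls, scrape_one))
write.csv(results, "extract.csv", row.names = FALSE)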

How can Data Extraction serve you?

You can extract data from almost any kind of website. Typical services include:

  •     Extracting data from any kind of website: directories, classified websites, news websites, blogs, articles, job portals, search engines, e-commerce websites, social media websites and any other site whose content can be accessed.
  •     Extracting emails, contacts, prices/rates, features, contact names, contact details, full text, live updates, ASINs, meta tags, addresses, phone, fax, latitude & longitude, images, links, reviews, ratings, etc.
  •     Helping with data collection, competitor analysis, research, business intelligence, social media trend analysis, brand monitoring, lead data collection, and website & competitor monitoring.
  •     Delivering data in any database or format: Excel, CSV, Access, text, MySQL, SQL, Oracle, etc.
  •     Custom web data extraction services as per the client’s needs: one-time data delivery or continued/scheduled data delivery.

The next one is Website Data Scraping:

Website data scraping is the process of extracting information or data from a website using a specific software program, available only from a proven provider.

This extracted data can be used by anyone, for any purpose, as their requirements dictate, and it is put to work in very different industries. There are several companies providing the best website data scraping services.

It is a field of active development, and one that shares a common objective requiring breakthroughs in the following areas:
  •     Text Processing
  •     Semantic Understanding
  •     Artificial Intelligence
  •     Human Computer Interactions

There are many end users, companies and specialists who require information that is available in one format or another. In such cases, web data extraction can be tailored to the need of extracting information from any proven source and preserving it at a chosen destination.

Supported formats include:
  •     Excel
  •     CSV
  •     MySQL
  •     Others

Websitedatascraping.com is well capable of handling web data scraping, website data scraping, web scraping services, website scraping services, data scraping services, product information scraping and Yellow Pages data scraping.

Tuesday 17 March 2015

Safeguarding the Future Through Data Mining

Web scraping can be a powerful tool, and not only in business and research. In fact, it has the capacity to protect the future through its predictive power. You may find this claim incredible, but data mining is indeed a tangible way of predicting future events and thus protecting life in the future.

With the thousands of years of existence on earth, humans are able to gather as much information and experience to have a glimpse of what is to come. With the cycles of changes in the environment and in the whole universe aside from the human behavior, so much can be learned and applied.

At least three major things can be determined by careful and diligent data mining. These are: future threats; future trends; and future tactics.

Future threats

According to reports, US intelligence agencies have been using web extraction to study past and present terrorist acts and figures in order to predict future terrorist events. This has been actively done since 2010.

Data is gathered about a known terrorist such as: his activities; his contacts; his routines; the places he frequents; and other related information. These data are analyzed and classified. Any suspicious activities as well as unusual contact are monitored closely. Through these stored data and monitoring processes, any untoward activities can be precluded and preempted. You may say that terrorists can be using data mining too; and that is obviously possible. In this way, web scraping can also be used as a weapon for destruction. There is then a need for the government agencies to be very careful in protecting their data so that the enemies cannot retrieve them.

In the overall picture, you can just imagine how many lives, trauma, and damage can be prevented if future terrorist activities are prevented.

Moreover, climate change is another phenomenon that has already been predicted and is beginning to occur. Scientists have been studying the effects of global warming and environmental degradation through online data too. Many information drives and warnings have been published in scholarly papers and by experts, but much of this has gone unheeded. Now that erratic weather conditions are happening, people can only regret and feel guilty that they are part of the cause of the problem.

However, it is not really too late to do some actions. People can avoid places where abnormal conditions are expected to happen; they can do some measures to protect themselves; and they can be informed ahead of time before anything catastrophic could happen.

Future trends

In relation to predicting possible threats, data extraction can also predict future trends. This is most helpful for businesses, because it helps them produce items and employ strategies that will suit their expected patrons and clients.  Since history tends to repeat itself, data gathered in the past and present, if studied judiciously and compared intelligently, can bring positive results.

Oftentimes, the companies that study their own books, as well as those of the companies that have gone before them, gain more knowledge and expertise, which surely puts them ahead of their contemporaries.

Future tactics

Naturally, along with knowing the possible events and trends in the future, strategies and ways to combat threats and cope with trends can also be predicted through web scraping.

Safeguarding the future is no longer a dream or wish. As early as today, experts can create equipment, structures, strategies, and even weapons to prevent any untoward incidents and collateral damage.

Studying the strengths and weaknesses of the past and present plans, procedures, and tools can lead to better technologies and techniques. The future can be a better and safer place if people can learn from the mistakes of the past and go from good to better.

The statement: “The best is yet to come,” will finally be realized if proper management of data and information collected and analyzed through web scraping will be conducted.

Bright future

Looking at the horizon, one can always expect the sun to shine and bring in a bright day. This same positive expectation for the future is indeed possible. Thanks to data mining; life can be handled more securely and precisely.

It does not mean that humans have become gods. It only proves that a person’s talents and skills, when used properly can make his/her future brighter and more successful. On the other hand, carelessness and lack of sensibilities to other people and the environment can surely bring in future doom.

Everything is laid bare and you are given the chance to handle the present with enough wisdom and capabilities. Although the world is too big to be understood and there is still a huge field of knowledge to be conquered, life can surely go on positively.

Source:http://www.loginworks.com/blogs/web-scraping-blogs/257-safeguarding-the-future-through-data-mining/

Monday 16 March 2015

About Web Scraping and Web Data Mining

Web scraping, as the name suggests, simply refers to a method of obtaining data or information from an array of websites. It has greatly changed how companies obtain data and put that data to significant use, and it has helped many firms and individuals in different spheres transact in the most effective ways. With the internet offering a very large base of information, various companies have turned to web scraping as a method of getting it. Relevance is of great importance, since it is the determining factor of what data has to be scraped and how. A good number of data mining companies have emerged to help individuals and organizations get the information they require; these companies also provide software which, when installed on a computer, may be of great use to the business.

Depending on its background, an organization may choose between mining the data itself or employing the services of a data mining company. These companies have data mining experts who are better suited to handle data professionally. It is not everyone’s job: computer languages are involved, so only those with the relevant knowledge are well suited to handle this work.

Although shortcuts are usually considered dangerous, in this case hiring the services of these experts may bring better results at lower cost than doing the job internally. What brings a company to a particular data mining provider? It may be an eye-catching advertisement somewhere, but mostly businesses do extensive research before making this choice, depending on the sensitivity, gravity and format of the data sought. These companies provide the needed data comprehensively and in the required format, so wasted time is unheard of in these businesses.

The research done by these companies has to be converted into meaningful results. This is done by carrying out meaningful analysis and deducing relevant conclusions from the data, an operation that greatly improves the efficiency of these organizations. The analysis is carried out by asking relevant questions about aspects of the business’s operations; frequently asked questions take a form like “how viable could this be in the short run?” It is in the quest for these answers that meaningful analysis is done.

Data mining companies have given the business world a major boost in the acquisition of data from the internet, which may now be considered the largest information resource. It is also through data mining that results from surveys across various parts of the world are acquired, eliminating the need for a company to carry out these surveys itself. Of particular importance is the aspect of language barriers, which makes an individual company seek the services of a reputable data mining firm. From all the above considerations it can be deduced that, if a business is to achieve its objectives, only proper web scraping techniques from reputable firms should be employed.

Web data mining refers to the ways and methods used to seek information from a wide array of websites. Companies and individuals employ it in their daily operations, since in the current world a lack of information can be a major setback. It is through data mining that enterprises are able to market their businesses, get information about upcoming promotions and sometimes make relevant discoveries. This explains why companies are investing so many resources in the search for relevant information. After an organization has gathered this information, implementation is vital; data mining should therefore be coupled with relevant implementation to count as a success. Analysis of the information is also important, since bulky information may not be required in its raw form. This is done using software that is capable of giving a good analysis of the data.

Analysis is done in different ways depending on requirements: one may require a summary, or categorized data. Previously, when data mining was not widely used, software development in this area was minimal. Recently there have been great improvements in this field, since many organizations are embracing it, particularly those in developing countries, and this has greatly improved the turnover of a good number of companies. Previously many firms collected data manually, which of course proved to be a very slow and tiring method, and many firms had to increase their staff to cope with the bulk of the workload. In an age where capitalism is thriving, they risked being phased out, since only the strongest survive and the weaker ones pave the way. Time was a factor in the increase in the number of organizations using this technologically advanced approach. The collected information is ultimately used to foster better growth of the company.

Based on findings from previous analyses, these techniques are employed by a firm’s employees irrespective of their rank. For example, managers may employ web data mining to make good, realistic decisions that are consistent with or better than those of other companies. Information may also aid in the improvement of business infrastructure, since in the modern world infrastructure development is on the rise; better infrastructure acquired on time will give the business a big push ahead of its competitors. With the relevant data mining software, companies are able to do meaningful research and development. For instance, companies are able to address issues regarding their customers’ preferences and individual wants.

A company can easily know who its customers are, what they like and what their economic abilities are. This can be used when the company decides to give offers, discounts and many other customer-enticing actions. For example, a soft drink company may place its promotions during winter, since that is when its customers are least likely to buy. With the great increase in the number of resources on the web, no individual can afford to ignore data mining.

Source: http://www.loginworks.com/blogs/web-scraping-blogs/247-web-scraping-and-web-data-mining/

Monday 9 March 2015

Online Retail - Mining for Gold

Online retailers live in an ever-changing environment, and the ability to stay competitive is the difference between doing well and doing nothing. In today's fast-paced internet marketplace, if you aren't using web scraping, you are missing a key component of growing your business.

Data Mining

Data mining your competition's prices and services and making sure your prices and services are similar, or even lower, is what makes the difference. Why should your customer choose you if they can get the same product somewhere else for less? What data you collect and how often you update it is also another key ingredient to success.

Extract Website Data

Web scraping allows you to gather information from your competition and use it to improve your position in the market. When you extract website data from your competitor's website, it allows you to conduct business from a position that doesn't involve guesswork. The internet is an environment that is constantly being updated and changed. It is vital that you have the ability to keep up-to-date information on what others in your market are doing. If you can't do this, you really can't compete.

Application of Information

When you know what your competitors are doing all the time, you can keep your business a little more competitive than they are. When you have information such as monthly and even weekly price variations in the market and what products and services are being offered, you can apply that information to your own pricing matrix and ensure a competitive edge in your market.

An Army of One


Web scraping gives you the ability to see what is going on in the market at all times. You can monitor just about anything you choose with a web scraping service. Many online retailers are very small operations and they don't have the resources to constantly monitor each competitor's website - so engaging a web scraping service is like having your own marketing and research team working for you night and day to keep tabs on them. You choose what it is you want to know, and your research team goes to work. Simple.

Staying Ahead of Trends

Having the ability to recognize trends is the key to any business, especially on the internet, where information is so fluid. The business that can identify a trend quickly and take advantage of it will always stay one step ahead. That's why big corporations have teams dedicated to researching market trends and predictions. If you can see where something is going, you can always get ahead of it. That's what web scraping can help you do: identify those trends in your market so you can get in ahead of the pack.

A Helping Hand

Sometimes running your own online retail business can be a daunting and lonely ordeal. Even those who have a great deal of experience with the internet can feel lost at times. A web scraping service is a tool you can use to help yourself in such times. Web scraping is automated and precise, and it gives you the ability to have vital information delivered to you in a manner you can understand and use. It's one less thing to worry about, and the information you get from data mining is what every business owner actually should worry about: what is the competition doing? With a web scraping service, you can concern yourself with other things, like making more profits.

Source: http://ezinearticles.com/?Online-Retail---Mining-for-Gold&id=6531024

Wednesday 4 March 2015

Outsource Data Mining Services to Offshore Data Entry Company

Companies in India offer complete solutions for all types of data mining services.

The data mining and web research services on offer help businesses get critical information for their analysis and marketing campaigns. As this process requires professionals with good knowledge of internet or online research, customers can take advantage of outsourcing their data mining, data extraction and data collection work to utilize these resources at a very competitive price.

In a time of recession every company is very careful about cost, so companies are now trying to find ways to cut costs, and outsourcing is a good option for doing so. It is relevant for businesses of every size, from small organizations to large ones. Data entry is the most common kind of outsourced work, and to meet demands for high quality and precise data entry, most corporate firms prefer to outsource data entry services to offshore countries like India.

In India there are a number of companies which offer high quality data entry work at the cheapest rates. Outsourcing data mining work is a crucial requirement for rapidly growing companies that want to focus on their core areas and control their costs.

Why outsource your data entry requirements?

Easy and fast communication: Flexible communication methods mean the provider will be ready to talk with you at a time convenient to you, and, depending on the demands of the work, a dedicated resource or a whole team will be assigned to drive the project.

Quality with a high level of accuracy: Experienced companies handling a variety of data-entry projects develop whole new quality processes for maintaining the best quality of work.

Turnaround time: The capability to deliver fast turnaround times as per project requirements to meet your deadline; dedicated staff can work 24/7 with a high level of accuracy.

Affordable rates: Services are provided at affordable rates for the industry. To minimize cost, every aspect of the system can be customized for handling work efficiently.

Outsourcing service providers are companies offering business process outsourcing services that specialize in data mining and data entry. They bring a team of highly skilled and efficient people, with a singular focus on data processing, data mining and data entry outsourcing services, catering to data entry projects of a varied nature and type.

Why outsource data mining services?

360 degree Data Processing Operations

Free Pilots Before You Hire

Years of Data Entry and Processing Experience

Domain Expertise in Multiple Industries

Best Outsourcing Prices in Industry

Highly Scalable Business Infrastructure

24X7 Round The Clock Services

The expert management and teams have delivered millions of processed data records to customers from the USA, Canada, the UK, other European countries and Australia.

Outsourcing companies specialize in data entry operations and guarantee the highest quality and on-time delivery at the least expensive prices.

Herat Patel, CEO at 3Alpha Dataentry Services, has over 15 years of experience in providing data-related services outsourced to India.

Visit our Facebook Data Entry profile for comments & reviews.

Our services help convert any kind of hard-copy source, and our data mining services help collect business contacts, customer contacts, product specifications, etc. from different web sources. We promise to deliver the best quality work and to help you excel in your business by letting you focus on your core business activities. Outsource data mining services to India and take advantage of outsourcing to save cost.

Source:http://ezinearticles.com/?Outsource-Data-Mining-Services-to-Offshore-Data-Entry-Company&id=4027029

Friday 27 February 2015

Choose the Best Data Mining Company With This Simple Rule

Data mining is the analysis step of knowledge discovery in databases. It involves finding patterns in large data sets using processes from artificial intelligence, machine learning, statistics and database systems. The main reason companies do data mining is to transform large sets of data into understandable blocks of information that can be used for market knowledge. It allows companies to make informed business decisions.

Data mining was looked upon as a luxury until some time back, but businesses are waking up to the importance of the process by seeing the difference it makes. Most multinational corporations already have mining integrated as one of their core processes. Many companies don't make strategic decisions unless they have their data converted into useful information using mining techniques. However, it is not a cheap process, and it has to be put to good use in order to justify its cost. This creates demand for data mining companies that can fulfill a client's needs while being resourceful and economical at the same time.

Searching for the perfect data mining company for your business becomes a lot easier if you follow one simple rule. The rule is to make sure you make enough strategic decisions to turn a good profit, or at least break even, from a single session of mining the data, which allows you to justify the cost you put into the whole process. Then, choose the company that offers you the best quotation, which allows you to maximize your profits and improve your business processes even more.

Most companies are not very stringent with their plans and pricing and would be happy to go that extra mile in order to help the client. That extra mile could include offering a discount on the whole process, or offering added services or extended time period in the same package and price as quoted. The way you negotiate with the company will decide the profit that you will make from the entire data mining process.

Data mining will not only improve your business decisions, it will improve your business processes as a whole. If used correctly, it will allow you to extract more out of the limited resources. It allows you to have comprehensive real time market knowledge that always keeps you ahead of your competitors. Therefore, putting in a few extra bucks to integrate it into your core business process is a really good idea. As mentioned earlier, if used correctly then it will not only justify its own cost but also increase profits manifold.

Choose the right company by integrating the whole process in your business and make the most of the market knowledge that is present on the internet. The power to make the best and the most informed decisions lies in your own hands, and data mining is one approach that will certainly get you a lot closer to your business goals.

Source:http://ezinearticles.com/?Choose-the-Best-Data-Mining-Company-With-This-Simple-Rule&id=8784911

Tuesday 24 February 2015

Data Extraction Services - A Helpful Hand For Large Organization

Data extraction is the process of extracting and structuring data from unstructured and semi-structured electronic documents, as found on the web and in various data warehouses. It is extremely useful for huge organizations which deal with considerable amounts of data daily, data which must be transformed into significant information and stored for later use.

Your company may have tons of data, but it can be difficult to control and convert that data into useful information. Without the right information at the right time, and working from half-accurate information, decision makers within a company waste time making wrong strategic decisions. In the highly competitive world of business, essential statistics such as customer information, competitors' operational figures and internal sales figures play a big role in making strategic decisions. Data extraction can help you take strategic business decisions that shape your business's goals.

Outsourcing companies provide services custom made to the client's requirements. A few of the areas where data extraction can be used are: generating better sales leads; extracting and harvesting product pricing data; capturing financial data; acquiring real estate data; conducting market research, surveys and analysis; conducting product research and analysis; and duplicating an online database.

The different types of Data Extraction Services:

•    Database Extraction: Reorganized data from multiple databases, such as statistics about competitors' products, pricing and latest offers, and customer opinions and reviews, can be extracted and stored as per the company's requirements.

•    Web Data Extraction: Also known simply as data extraction, this usually refers to the practice of extracting or reading text data from a targeted website.

Businesses have now realized the huge benefits they can get by outsourcing their services, making outsourcing a profitable option. Since all projects are custom based to suit the exact needs of the customer, huge savings in terms of time, money and infrastructure are among the many advantages that outsourcing brings.

Advantages of Outsourcing Data Extraction Services:

•    Improved technology scalability
•    Skilled and qualified technical staff who are proficient in English
•    Advanced infrastructure resources
•    Quick turnaround time
•    Cost-effective prices
•    Secure Network systems to ensure data safety
•    Increased market coverage

By outsourcing, you can definitely increase your competitive advantages. Outsourcing of services helps businesses to manage their data effectively, which in turn would enable them to experience an increase in profits.

Outsourcing Web Research offers complete data extraction services and solutions to quickly collect data and information from multiple internet sources for your business needs, in a cost-efficient manner.

Source: http://ezinearticles.com/?Data-Extraction-Services---A-Helpful-Hand-For-Large-Organization&id=2477589