Key Factors For Web Data Curation Using Artificial Intelligence

Posted by Saumitra Shubham on December 8, 2017 in Blog

Are you spending a lot of human effort analyzing data on various websites manually? Are your knowledge workers getting frustrated? Are you interested in automating the tedious stuff? Do you want to do more with less?

Welcome to the world of Artificial Intelligence for web data curation!

“Web data curation is the process of extracting web data, then storing and preparing it for further analysis”

Web Data Curation

The application of Artificial Intelligence has helped increase operational efficiency and analyze more content with less effort. On the other hand, although there is a lot of potential for applying Artificial Intelligence across business functions, business leaders often hesitate to invest in it due to a lack of awareness of AI’s capabilities. Convincing businesses to apply Artificial Intelligence therefore becomes challenging. Ellicium has developed a comprehensive methodology to realize the potential of AI to add value to businesses. This article elaborates on one such application: AI for web data curation.

After having helped multiple businesses, ranging from start-ups to multi-billion-dollar organizations, we have identified a few key factors for web data curation using Artificial Intelligence:

1. Identification of reliable and relevant web sources – There is a humongous number of web sources for every domain that claim to hold relevant and up-to-date data. The quality of insights depends on the quality of data. The following points are vital for determining quality data sources (see the scoring sketch after this list):

  • Consulting domain experts
  • Defining rankings and ratings of websites using an AI program based on the following parameters: number of hits, frequency of content updates on the website, and validation of content by comparing it with different sources
  • Using websites with public content
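
As a minimal sketch of the ranking idea above, the function below combines the three parameters into one quality score. The field names, normalization caps, and weights are illustrative assumptions, not tuned production values:

```python
import math
from dataclasses import dataclass

@dataclass
class SourceStats:
    hits: int                 # traffic proxy for the website
    updates_per_week: float   # how often its content changes
    agreement: float          # 0..1, share of content confirmed by other sources

def quality_score(s: SourceStats, w_hits=0.3, w_freq=0.3, w_agree=0.4) -> float:
    """Weighted score in [0, 1]; weights are illustrative, not tuned."""
    hits_norm = min(math.log10(s.hits + 1) / 6, 1.0)   # saturate around 1M hits
    freq_norm = min(s.updates_per_week / 7, 1.0)       # saturate at daily updates
    return w_hits * hits_norm + w_freq * freq_norm + w_agree * s.agreement

sources = {"example-news.org": SourceStats(120_000, 5, 0.9),
           "stale-mirror.net": SourceStats(900, 0.2, 0.4)}
print(sorted(sources, key=lambda k: quality_score(sources[k]), reverse=True))
```

Sources scoring below a chosen cut-off can then be dropped from the curation pipeline.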

2. Define an appropriate web monitoring frequency – The content of a website changes periodically based on certain events, but the time between consecutive changes is generally not constant. We set the monitoring frequency depending on the type of website and on requirements.
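
One simple way to cope with the non-constant change interval is adaptive polling: re-check soon after a change is detected and back off while the page is stable. A minimal sketch, assuming the requests library; the interval bounds are illustrative:

```python
import hashlib
import time
import requests  # assumed available

def monitor(url: str, min_interval: int = 300, max_interval: int = 86400) -> None:
    """Poll a URL, shrinking the interval on change and doubling it when stable."""
    interval, last_digest = min_interval, None
    while True:
        body = requests.get(url, timeout=30).content
        digest = hashlib.sha256(body).hexdigest()
        if digest != last_digest:
            interval = min_interval                      # changed: check again soon
        else:
            interval = min(interval * 2, max_interval)   # stable: back off
        last_digest = digest
        time.sleep(interval)
```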

3. Consider variations in website layouts – Different web data sources have different formats, each requiring a format-specific curation process. Generalizing the overall curation process is a challenge here because (see the rendering sketch after this list):

  • Every web source can have different metadata or structure
  • The content on some web sources is dynamic, loaded only on user events such as scrolling, clicking, or hovering.
  • As new GUI technologies emerge to improve user experience, many web sources change their page structure, which makes ongoing maintenance of the AI engine absolutely necessary.
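
Dynamic, lazily loaded content cannot be captured with a plain HTTP fetch. One common approach (a sketch, not necessarily our engine) is to render the page in a headless browser and simulate the user events; this example assumes the Playwright library is installed:

```python
from playwright.sync_api import sync_playwright

def fetch_dynamic(url: str, scrolls: int = 5) -> str:
    """Render a JavaScript-heavy page and trigger lazy loading by scrolling."""
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        for _ in range(scrolls):                  # simulate a user scrolling down
            page.evaluate("window.scrollTo(0, document.body.scrollHeight)")
            page.wait_for_timeout(1000)           # give lazy content time to load
        html = page.content()
        browser.close()
        return html
```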

4. Define limits for the web crawler – A drill-down approach is often used while crawling data from different web sources, but defining the depth for crawling is a challenge. A depth-limited crawl is sketched below.
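
To make the depth limit concrete, here is a minimal breadth-first crawler that stops expanding links beyond max_depth and stays on the seed’s domain. It assumes the requests and BeautifulSoup libraries; error handling and politeness delays are kept minimal for brevity:

```python
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup  # assumed available

def crawl(seed: str, max_depth: int = 2) -> set:
    """Breadth-first crawl bounded by max_depth, restricted to the seed's domain."""
    domain = urlparse(seed).netloc
    seen, queue = {seed}, deque([(seed, 0)])
    while queue:
        url, depth = queue.popleft()
        if depth >= max_depth:
            continue                                  # drill-down limit reached
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urljoin(url, a["href"])
            if urlparse(link).netloc == domain and link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return seen
```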

5. Understand data security and accessibility policies – Many websites have security policies governing robotic data extraction, and the challenge is to tune the extraction process so that it stays within those policies and avoids conflict.
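
One concrete, widely applicable policy check is robots.txt. The snippet below uses Python’s standard urllib.robotparser to test whether a crawler may fetch a URL; the user-agent name is a placeholder:

```python
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

def allowed(url: str, agent: str = "MyCrawler") -> bool:
    """Check a URL against the site's robots.txt before fetching it."""
    root = "{0.scheme}://{0.netloc}".format(urlparse(url))
    parser = RobotFileParser(root + "/robots.txt")
    parser.read()                       # download and parse the policy file
    return parser.can_fetch(agent, url)

print(allowed("https://example.com/some/page"))
```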

6. Data formats of different web data sources are difficult to generalize – Every web source has a different data format with a different schema, and defining common metadata for all of them is a challenge. This is where a NoSQL database helps: it can store data with different schemas side by side.
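
As an illustration of schema-flexible storage, the sketch below writes documents of different shapes into one MongoDB collection. It assumes the pymongo driver and a local MongoDB instance; the database, collection, field names, and sample data are hypothetical:

```python
from pymongo import MongoClient  # assumed: MongoDB running locally

pages = MongoClient("mongodb://localhost:27017")["curation"]["pages"]

# Documents from different sources keep their own shapes; no shared schema needed.
pages.insert_many([
    {"source": "news-site", "headline": "Rate hike announced", "published": "2017-12-08"},
    {"source": "forum", "thread_id": 42, "posts": [{"user": "a", "text": "Agreed."}]},
])
print(pages.count_documents({"source": "forum"}))
```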

7. Wisely choose an algorithm to determine relevant data – To learn how we choose and tune different algorithms, refer to our article here.
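
Without presuming the algorithms discussed in that article, one common baseline for relevance filtering is TF-IDF similarity between each document and a query describing the topic of interest. A sketch assuming scikit-learn; the threshold is illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def relevant(docs: list, query: str, threshold: float = 0.15) -> list:
    """Keep documents whose TF-IDF similarity to the query exceeds a threshold."""
    matrix = TfidfVectorizer(stop_words="english").fit_transform(docs + [query])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return [d for d, s in zip(docs, scores) if s >= threshold]

docs = ["central bank raises interest rates",
        "recipe for chocolate cake",
        "credit rating agencies react to the rate hike"]
print(relevant(docs, "bank credit rating"))
```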

8. Design for scalability – To handle ever-growing web data, we design systems that are scalable. We have successfully implemented various machine learning algorithms that leverage the native parallelism of commodity hardware to speed up the AI process.
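
As a minimal illustration of commodity-hardware parallelism, the sketch below fans the per-page curation step out across CPU cores with Python’s multiprocessing; curate is a hypothetical stand-in for the real extract-and-score pipeline:

```python
from multiprocessing import Pool, cpu_count

def curate(url: str) -> dict:
    """Hypothetical placeholder for the per-page extraction and scoring step."""
    return {"url": url, "status": "processed"}

if __name__ == "__main__":
    urls = [f"https://example.com/page/{i}" for i in range(1000)]
    with Pool(processes=cpu_count()) as pool:      # one worker per CPU core
        results = pool.map(curate, urls, chunksize=50)
    print(len(results), "pages curated")
```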

As can be seen in many of today’s businesses, such as banks using social media analysis for credit rating or LPOs using web crawlers to keep themselves updated on legal developments, there are a lot of actionable insights that can be drawn from large amounts of well-curated web data. We hope this article helps you take steps toward growing your business by capturing these insights while saving resources.

 

This article is co-authored by Saumitra Modak