Big data application development: An introduction to Hadoop

The numbers look promising: IDC forecasts that worldwide revenues for big data and business analytics will reach $150.8 billion in 2017, a 12.4% increase over 2016.

However, big data application development is challenging. With the growth of mobile, social media, and the Internet of Things, the volume of data that enterprises collect continues to climb.

“Traditional database management systems (DBMSs) do not easily scale to support very large data sets,” noted Geneva Lake, vice president of worldwide alliances at MapR Technologies Inc. Also, old-school systems do not work well with unstructured information such as video.

A new generation of data management technology emerged to fill the gaps. Hadoop traces its roots to the Google File System, which Google described in a paper published in the fall of 2003. By early 2006, the work had evolved into an open source project, and development was turned over to the Apache Software Foundation.

Hadoop is an open source framework for storing and processing large data sets using the MapReduce programming model. The software runs on clusters of commodity hardware. Leading Hadoop distributions come from vendors such as Cloudera Inc., Hortonworks Inc. and MapR Technologies, all of which run partner programs for channel companies.
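To make the MapReduce idea concrete, here is a minimal sketch of the model in plain Python. This is an illustration of the programming pattern only, not Hadoop's actual Java API; the function names and the word-count task are chosen for clarity. In a real cluster, the map, shuffle and reduce steps run in parallel across many machines.

```python
from itertools import groupby

def map_phase(documents):
    # Map step: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    # Shuffle step: group pairs by key. Hadoop performs this grouping
    # between the map and reduce phases across the cluster.
    grouped = groupby(sorted(pairs), key=lambda kv: kv[0])
    # Reduce step: sum the counts emitted for each word.
    return {word: sum(n for _, n in group) for word, group in grouped}

docs = ["big data big clusters", "data flows in clusters"]
counts = reduce_phase(map_phase(docs))
# counts -> {'big': 2, 'clusters': 2, 'data': 2, 'flows': 1, 'in': 1}
```

Because each map call sees only one document and each reduce call sees only one key's values, the same logic scales from a laptop to thousands of commodity nodes.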

Big data application development

The complexity of big data application development and deployment is opening doors for channel partners that can help make their customers’ projects successful. The task requires more than finding powerful hardware and software. Companies want to mine their information — use it…
