IES Management College And Research Centre


Data analytics with Hadoop : an introduction for data scientists

By: Bengfort, Benjamin | Kim, Jenny
Publication details: Mumbai : Shroff Publishing House, 2016
Description: xvi, 268 p.
ISBN:
  • 978-93-5213-374-1
Subject(s):
DDC classification:
  • 005.7 Ben/Kim
Tags from this library: No tags from this library for this title.
Holdings
Item type: Book
Current library: Main Library
Call number: 005.7/Ben/Kim/32851
Status: Available
Barcode: 11132851
Total holds: 0

Contents:

Introduction to Distributed Computing
Chapter 1. The Age of the Data Product
What Is a Data Product?
Building Data Products at Scale with Hadoop
The Data Science Pipeline and the Hadoop Ecosystem
Conclusion
Chapter 2. An Operating System for Big Data
Basic Concepts
Hadoop Architecture
Working with a Distributed File System
Working with Distributed Computation
Submitting a MapReduce Job to YARN
Conclusion
Chapter 3. A Framework for Python and Hadoop Streaming
Hadoop Streaming
A Framework for MapReduce with Python
Advanced MapReduce
Conclusion
Chapter 4. In-Memory Computing with Spark
Spark Basics
Interactive Spark Using PySpark
Writing Spark Applications
Conclusion
Chapter 5. Distributed Analysis and Patterns
Computing with Keys
Design Patterns
Toward Last-Mile Analytics
Conclusion
Workflows and Tools for Big Data Science
Chapter 6. Data Mining and Warehousing
Structured Data Queries with Hive
HBase
Conclusion
Chapter 7. Data Ingestion
Importing Relational Data with Sqoop
Ingesting Streaming Data with Flume
Conclusion
Chapter 8. Analytics with Higher-Level APIs
Pig
Spark’s Higher-Level APIs
Conclusion
Chapter 9. Machine Learning
Scalable Machine Learning with Spark
Conclusion
Chapter 10. Summary: Doing Distributed Data Science
Data Product Lifecycle
Machine Learning Lifecycle
Conclusion
Appendix A. Creating a Hadoop Pseudo-Distributed Development Environment
Quick Start
Setting Up Linux
Installing Hadoop
Appendix B. Installing Hadoop Ecosystem Products
Packaged Hadoop Distributions
Self-Installation of Apache Hadoop Ecosystem Products

Summary:

Ready to use statistical and machine-learning techniques across large data sets? This practical guide shows you why the Hadoop ecosystem is well suited to the job. Instead of the deployment, operations, and software development usually associated with distributed computing, you’ll focus on the particular analyses you can build, the data warehousing techniques that Hadoop provides, and the higher-order data workflows this framework can produce.

Data scientists and analysts will learn how to perform a wide range of techniques, from writing MapReduce and Spark applications with Python to using advanced modeling and data management with Spark MLlib, Hive, and HBase. You’ll also learn about the analytical processes and data systems available to build and empower data products that can handle—and actually require—huge amounts of data.
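
For a sense of what “writing MapReduce and Spark applications with Python” looks like in practice, here is a minimal PySpark word-count sketch. It is not taken from the book; the input filename and application name are illustrative assumptions.

from operator import add

from pyspark import SparkConf, SparkContext

# Run locally for illustration; on a cluster you would submit this via spark-submit.
conf = SparkConf().setAppName("wordcount-sketch").setMaster("local[*]")
sc = SparkContext(conf=conf)

# Split each line into words, pair each word with 1, and sum the pairs per word.
counts = (
    sc.textFile("corpus.txt")  # hypothetical input file
      .flatMap(lambda line: line.split())
      .map(lambda word: (word, 1))
      .reduceByKey(add)
)

# Show the ten most frequent words.
for word, count in counts.takeOrdered(10, key=lambda wc: -wc[1]):
    print(word, count)

sc.stop()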
• Understand core concepts behind Hadoop and cluster computing
• Use design patterns and parallel analytical algorithms to create distributed data analysis jobs
• Learn about data management, mining, and warehousing in a distributed context using Apache Hive and HBase
• Use Sqoop to import relational data and Apache Flume to ingest streaming data
• Program complex Hadoop and Spark applications with Apache Pig and Spark DataFrames
• Perform machine learning techniques such as classification, clustering, and collaborative filtering with Spark’s MLlib (see the sketch below)
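
As a flavor of the Chapter 9 material, here is a minimal classification sketch against the RDD-based MLlib API that Spark shipped around the time of this book; the four-point training set is purely illustrative.

from pyspark import SparkContext
from pyspark.mllib.classification import LogisticRegressionWithLBFGS
from pyspark.mllib.regression import LabeledPoint

sc = SparkContext("local[*]", "mllib-sketch")

# Tiny labeled training set: label 1.0 when the first feature dominates.
training = sc.parallelize([
    LabeledPoint(0.0, [0.1, 0.9]),
    LabeledPoint(0.0, [0.2, 0.8]),
    LabeledPoint(1.0, [0.9, 0.1]),
    LabeledPoint(1.0, [0.8, 0.3]),
])

# Fit a logistic regression model and classify an unseen point.
model = LogisticRegressionWithLBFGS.train(training)
print(model.predict([0.7, 0.2]))  # expected class: 1

sc.stop()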

There are no comments on this title.

