Gain hands-on experience in building efficient and scalable big data architecture on Kubernetes, utilizing leading technologies such as Spark, Airflow, Kafka, and Trino
Key Features:
- Leverage Kubernetes in a cloud environment to integrate seamlessly with a variety of tools
- Explore best practices for optimizing the performance of big data pipelines
- Build end-to-end data pipelines and discover real-world use cases using popular tools like Spark, Airflow, and Kafka
- Purchase of the print or Kindle book includes a free PDF eBook
Book Description:
In today's data-driven world, organizations across different sectors need scalable and efficient solutions for processing large volumes of data. Kubernetes offers an open-source and cost-effective platform for deploying and managing big data tools and workloads, ensuring optimal resource utilization and minimizing operational overhead. If you want to master the art of building and deploying big data solutions using Kubernetes, then this book is for you.
Written by an experienced data specialist, Big Data on Kubernetes takes you through the entire process of developing scalable and resilient data pipelines, with a focus on practical implementation. Starting with the basics, you'll learn how to install Docker and run your first containerized applications. You'll then explore Kubernetes architecture and understand its core components. This knowledge paves the way for working with essential big data processing tools such as Apache Spark and Apache Airflow, and you'll learn how to install and configure them on Kubernetes clusters. Throughout the book, you'll gain hands-on experience building a complete big data stack on Kubernetes.
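To give you a feel for the hands-on style, here is a minimal PySpark word-count sketch of the kind of job you would package into a container image and submit to a Kubernetes cluster; the application name and input path are illustrative placeholders, not code taken from the book.

```python
# A minimal PySpark sketch: read text, split it into words, count occurrences.
# The app name and input path are placeholders; the script assumes it will be
# packaged into a container image and submitted with spark-submit.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("word-count-on-k8s").getOrCreate()

lines = spark.read.text("s3a://my-bucket/raw/input.txt")  # placeholder path
words = lines.select(F.explode(F.split(F.col("value"), r"\s+")).alias("word"))
counts = words.groupBy("word").count().orderBy(F.desc("count"))

counts.show(10)
spark.stop()
```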
By the end of this Kubernetes book, you'll be equipped with the skills and knowledge you need to tackle real-world big data challenges with confidence.
What You Will Learn:
- Install and use Docker to run containers and build compact images
- Gain a deep understanding of Kubernetes architecture and its components
- Deploy and manage Kubernetes clusters on different cloud platforms
- Implement and manage data pipelines using Apache Spark and Apache Airflow (see the orchestration sketch after this list)
- Deploy and configure Apache Kafka for real-time data ingestion and processing
- Build and orchestrate a complete big data pipeline using open-source tools
- Deploy Generative AI applications on a Kubernetes-based architecture
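The sketch below gives a flavor of that pipeline orchestration: a minimal Apache Airflow DAG, assuming Airflow 2.4 or later; the DAG id, schedule, and echo commands are illustrative placeholders standing in for real ingestion and Spark tasks.

```python
# A minimal Airflow DAG sketch (assumes Airflow 2.4+); the DAG id, schedule,
# and commands are placeholders, not code taken from the book.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_batch_pipeline",   # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # run once per day
    catchup=False,                   # skip backfilling past runs
):
    ingest = BashOperator(
        task_id="ingest",
        bash_command="echo 'pull raw data into object storage'",
    )
    transform = BashOperator(
        task_id="transform",
        bash_command="echo 'run the Spark transformation job'",
    )

    ingest >> transform  # ingestion runs before transformation
```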
Who This Book Is For:
If you're a data engineer, BI analyst, data team leader, data architect, or tech manager with a basic understanding of big data technologies, then this big data book is for you. Familiarity with the basics of Python programming, SQL queries, and YAML is required to understand the topics discussed in this book.
Table of Contents
- Getting Started with Containers
- Kubernetes Architecture
- Kubernetes - Hands-On
- The Modern Data Stack
- Big Data Processing with Apache Spark
- Apache Airflow for Building Pipelines
- Apache Kafka for Real-Time Events and Data Ingestion
- Deploying the Big Data Stack on Kubernetes
- Data Consumption Layer
- Building a Big Data Pipeline on Kubernetes
- AI/ML Workloads on Kubernetes
- Where to Go from Here