Welcome to OLake
The fastest open-source tool for replicating databases to Apache Iceberg and data lakehouses. ⚡ Efficient, fast, and scalable data ingestion for real-time analytics. Visit olake.io for the full documentation and benchmarks.
Introduction to OLake
Welcome to OLake – the fastest open-source DB-to-Data LakeHouse pipeline, designed to bring your database data (Postgres, MySQL, MongoDB) into modern analytics ecosystems like Apache Iceberg. OLake was born out of the need to eliminate the toil of one-off ETL scripts, combat performance bottlenecks, and avoid vendor lock-in with a clean, high-performing solution.
GitHub Repository: https://github.com/datazip-inc/olake
Overview
OLake’s primary goal is simple: to provide the fastest data pipeline from your database to a Data LakeHouse—in this case, Apache Iceberg. With OLake you can:
- Capture data efficiently: Start with a full snapshot of your database tables / collections, then transition seamlessly to near real-time Change Data Capture (CDC) using each database's change-stream mechanism (WAL, binlogs, oplogs).
- Achieve high throughput: Utilize parallelized chunking and integrated Destinations to handle large volumes of data—ensuring rapid full loads and lightning-fast incremental updates.
- Maintain schema integrity: Detect and adapt to evolving document structures with built-in alerts for any schema changes.
- Embrace openness: Store your data in open formats (e.g., Parquet and Apache Iceberg) to keep your analytics engine agnostic and avoid vendor lock-in.
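The parallelized chunking mentioned above boils down to splitting a table's key space into independent ranges that workers can scan concurrently. As a rough illustration of the idea (names and types here are made up for the sketch, not OLake's internal API):

```go
package main

import "fmt"

// chunk describes a half-open range of primary-key values that one
// worker can scan independently of the others.
type chunk struct {
	min, max int64
}

// splitRange divides [min, max) into roughly equal chunks so a full
// snapshot can be read in parallel. Illustrative sketch only.
func splitRange(min, max, n int64) []chunk {
	size := (max - min + n - 1) / n // ceiling division
	var chunks []chunk
	for lo := min; lo < max; lo += size {
		hi := lo + size
		if hi > max {
			hi = max
		}
		chunks = append(chunks, chunk{lo, hi})
	}
	return chunks
}

func main() {
	for _, c := range splitRange(0, 10, 3) {
		fmt.Printf("SELECT ... WHERE id >= %d AND id < %d\n", c.min, c.max)
	}
}
```

Each range then maps to one query that a worker can run without coordinating with the others.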
Features:
- Iceberg Writer: OLake writes data in Apache Iceberg format, which is an open table format for huge analytic datasets.
- Open Formats: OLake stores data in open formats like Parquet and Apache Iceberg, ensuring that your data is accessible and queryable by a variety of tools.
- AWS S3 Partitioning: OLake supports partitioning data in S3, allowing for efficient storage and retrieval of data.
- Iceberg Partitioning: OLake supports partitioning data in Apache Iceberg, allowing for efficient querying and data management.
- Apache Iceberg Catalog Support: OLake supports various catalog types, including REST, Hive, AWS Glue, and JDBC, allowing for easy integration with your existing data lake infrastructure.
- Parallelized Chunking: OLake divides the data into smaller chunks for faster processing and parallel execution.
- Change Data Capture (CDC): OLake captures changes in the source database in near real-time, ensuring that your data lake is always up-to-date.
- Schema Evolution: OLake can handle changes in the schema of the source database, ensuring that your data lake remains consistent and accurate.
- Schema Datatype Changes: OLake natively supports the schema datatype changes that Iceberg v2 tables allow.
- Source Database Connectors: OLake provides connectors for various databases, including MongoDB, Postgres, and MySQL, allowing for easy integration with your existing data sources.
- Data Flattening / Normalization: OLake supports Level 1 flattening of Data for JSON type fields, allowing for easier querying and analysis of complex data structures.
- Synchronization Modes: OLake supports both full and CDC (Change Data Capture) synchronization modes. Incremental sync is a work in progress and will be released soon.
- Sync with State: OLake keeps track of the last synced state and syncs only new or updated data.
- Resumable Full Load: If a full load fails, it can be resumed from the last successful checkpoint.
- Backoff Retry Count: If a sync fails, OLake retries it after a backoff delay, up to a configurable retry count.
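The Level 1 flattening feature above can be pictured as lifting the fields of each top-level nested object into dotted columns, while deeper nesting is left untouched. A minimal sketch of the idea (illustrative only, not OLake's actual transformation code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// flattenLevel1 lifts the fields of top-level nested objects into
// dotted keys (e.g. "address.city"); deeper levels stay as-is.
// Sketch of the concept, not OLake's implementation.
func flattenLevel1(doc map[string]any) map[string]any {
	out := make(map[string]any)
	for k, v := range doc {
		if nested, ok := v.(map[string]any); ok {
			for nk, nv := range nested {
				out[k+"."+nk] = nv
			}
		} else {
			out[k] = v
		}
	}
	return out
}

func main() {
	raw := []byte(`{"id": 1, "address": {"city": "Pune", "zip": "411001"}}`)
	var doc map[string]any
	if err := json.Unmarshal(raw, &doc); err != nil {
		panic(err)
	}
	flat := flattenLevel1(doc)
	fmt.Println(flat["address.city"]) // prints Pune
}
```

After flattening, a JSON field like `address.city` is queryable as an ordinary column.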
Architectural Overview
OLake is designed as a modular, high-performance system with distinct components that each excel at their core responsibilities. The following diagram (see architecture cover image) provides a visual overview of how data flows through OLake:
Data Flow in OLake
- Initial Snapshot:
- Executes a full table / collection read from the database by firing queries.
- Divides the collection into parallel chunks for rapid processing.
- Change Data Capture (CDC):
- Sets up database change streams (WAL, binlogs, oplogs) to capture near real-time updates.
- Ensures any changes that occur during the snapshot are also captured.
- Parallel Processing:
- Users can configure the number of parallel threads, balancing sync speed against the load on your database cluster.
- Transformation & Normalization:
- Flattens complex, semi-structured fields into relational streams.
- Provides basic (Level 1) flattening today, with more advanced nested JSON support on the way.
- Integrated Writes:
- Pushes transformed data directly to target destinations (e.g., local Parquet, S3 Iceberg Parquet) without intermediary buffering.
- This integration minimizes latency and avoids blocking reads.
- Monitoring & Alerts:
- Continuously monitors schema changes and system metrics.
- Raises alerts for any discrepancies or potential issues, ensuring early detection of data loss or transformation errors.
- Logs & Testing:
- Provides detailed logging for transparency.
- Supports unit, integration, and workflow testing, ensuring the reliability of both full loads and incremental syncs.
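The snapshot-plus-parallel-chunks flow above can be sketched as a worker pool: the configured thread count caps how many chunks are read concurrently, which is what balances speed against database load. All names below are illustrative, not OLake's internals:

```go
package main

import (
	"fmt"
	"sync"
)

// processChunks fans chunk IDs out to a fixed number of workers,
// mirroring how a configurable thread count trades sync speed
// against load on the source database. Illustrative sketch only.
func processChunks(chunkIDs []int, threads int, read func(int) int) int {
	jobs := make(chan int)
	var wg sync.WaitGroup
	var mu sync.Mutex
	total := 0

	for w := 0; w < threads; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for id := range jobs {
				rows := read(id) // e.g. scan one key range, write Parquet
				mu.Lock()
				total += rows
				mu.Unlock()
			}
		}()
	}
	for _, id := range chunkIDs {
		jobs <- id
	}
	close(jobs)
	wg.Wait()
	return total
}

func main() {
	// Pretend each of 4 chunks yields 100 rows, read by 2 workers.
	total := processChunks([]int{0, 1, 2, 3}, 2, func(int) int { return 100 })
	fmt.Println(total) // 400
}
```

Because chunks are independent, a failed chunk can be retried or resumed without redoing the rest of the snapshot.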
Core Components
OLake is built around several key modules:
- CLI & Commands: Offers commands like `spec`, `check`, `discover`, and `sync` for seamless pipeline orchestration. Configurable flags (such as `--batch_size` and `--thread_count`) allow you to optimize performance.
- Framework (CDK): The robust foundation that powers OLake’s orchestration and modular design.
- Connectors (Drivers): Each driver encapsulates the logic required to interact with a specific source system. The drivers (connector / source database) manage:
- Full Load: Efficiently partitioning and processing large collections.
- CDC: Setting up and maintaining change streams to capture incremental changes.
- Incremental Sync: Ensuring that only new or modified data is processed after the initial snapshot. (Work in progress; releasing soon.)
- Schema Discovery: Automatically identifying the schema of the source data.
- Schema Evolution: Detecting and adapting to changes in the source schema.
- Data Transformation: Flattening complex structures into a more manageable format. We currently support basic (Level 1) flattening, with plans for more advanced nested JSON handling in the future.
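Schema evolution in an Iceberg v2 table only works for widening type promotions (e.g. int → long, float → double, per the Iceberg spec); narrowing changes would lose data. A sketch of how a driver might vet a detected datatype change (the function and names are illustrative, not OLake's code):

```go
package main

import "fmt"

// allowedPromotions lists type changes Iceberg v2 tables accept
// without rewriting existing data files (per the Iceberg spec).
var allowedPromotions = map[string]string{
	"int":   "long",
	"float": "double",
}

// canEvolve reports whether a detected source-schema type change can
// be applied in place to the Iceberg table. Illustrative sketch.
func canEvolve(oldType, newType string) bool {
	if oldType == newType {
		return true
	}
	return allowedPromotions[oldType] == newType
}

func main() {
	fmt.Println(canEvolve("int", "long")) // true: widening is safe
	fmt.Println(canEvolve("long", "int")) // false: narrowing loses data
}
```

Changes outside this table would instead surface as an alert, so the discrepancy is caught before it corrupts the target table.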
- Destinations:
- Tightly integrated with drivers, Destinations (Apache Iceberg format) ensure that once data is extracted, it is immediately pushed to your chosen destination—whether that be local storage or cloud-based solutions.
- AWS S3 partitioning is supported, allowing you to store data in a structured manner that aligns with your analytical needs.
- Iceberg data partitioning is also supported, enabling efficient querying and data management.
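Partitioned layout on S3 amounts to encoding partition values into the object key, Hive-style, so query engines can prune whole prefixes. A sketch of the path construction (the bucket, table, and column names below are made up; this is not OLake's writer code):

```go
package main

import (
	"fmt"
	"strings"
)

// partitionPath builds a Hive-style partition prefix like
// "db/orders/year=2024/month=05/" from ordered key-value pairs.
// Illustrative of the layout only.
func partitionPath(table string, parts [][2]string) string {
	var b strings.Builder
	b.WriteString(table + "/")
	for _, p := range parts {
		b.WriteString(p[0] + "=" + p[1] + "/")
	}
	return b.String()
}

func main() {
	path := partitionPath("db/orders", [][2]string{{"year", "2024"}, {"month", "05"}})
	fmt.Println("s3://my-bucket/" + path + "part-00000.parquet")
}
```

A query filtering on `year` and `month` then only needs to list and read the matching prefixes.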
- Monitoring & Alerting: An integrated system to keep track of process status, performance metrics, and schema evolution.
- SDK & Testing Setup: Provides an SDK for custom integrations and a comprehensive testing suite to ensure robust data synchronization.
OLake in a Lakehouse Ecosystem
By storing data in open formats like Parquet and Apache Iceberg, OLake offers:
- Flexibility: Seamless integration with popular query engines such as Spark, Trino, Flink, and even Snowflake external tables.
- Avoidance of Vendor Lock-In: Your data remains accessible and queryable regardless of the analytical tool you choose.
- Real-Time Data Replication: Near real-time updates through each Database's change streams keep your lakehouse data fresh.
Performance Benchmarks*
- MongoDB Connector: Syncs 35,694 records/sec; 230 million rows in 46 minutes for a 664 GB dataset (20× Airbyte, 15× Embedded Debezium, 6× Fivetran) -> (See Detailed Benchmark)
- Postgres Connector: Syncs 1,000,000 records/sec for 50GB -> (See Detailed Benchmark)
- MySQL Connector: Syncs 1,000,000 records/sec for 10GB; ~209 mins for 100+GB -> (See Detailed Benchmark)
*These are preliminary results; we'll publish fully reproducible benchmark scores soon.
Future Enhancements
OLake is continually evolving. Upcoming features include:
- Enhanced Nested JSON Handling: Advanced flattening for deeper nested structures.
- Simplified Deployment: A single self-contained binary for easier setup and maintenance.
- Flexible Deployment Options: Support for Bring Your Own Cloud (BYOC), On-Prem, and multiple cloud platforms (GCP, Azure).
- Enterprise-Grade Security & Consistency: Instant transactional consistency and robust security integrations.
- Expanded Connector Support: Future connectors for Kafka, S3, Oracle, and DynamoDB. Visit the roadmap for more detailed info.
- Unified UI & Server Management: A centralized interface for managing all OLake features. OLake UI GitHub repo
- Schema Evolution: Support for schema changes in the source database.