Apache Paimon, a Next-Generation Data Lake Storage Technology: Getting-Started Demo


Table of Contents

Preface

1. What is Apache Paimon

I. Quick start in a local environment

1. Local Flink pseudo-cluster

2. Running the Paimon demo in IDEA

2.1 Code

2.2 Successful run in IDEA

3. Streaming read/write in IDEA

3.1 Streaming write

3.2 Streaming read (toChangelogStream)

II. Advanced: local (IDEA) multi-stream join test

The problem to solve:

Note:

1. 'changelog-producer' = 'full-compaction'

(1) multiWrite code

(2) Read latency

2. 'changelog-producer' = 'lookup'

III. Problems you may encounter


Preface

1. What is Apache Paimon

Apache Paimon (incubating) is a streaming data lake storage technology that provides high-throughput, low-latency data ingestion, streaming subscription, and real-time query capabilities.

Paimon embraces open data formats and an open technical philosophy, and integrates with many mainstream compute engines such as Apache Flink / Spark / Trino, jointly advancing the adoption and development of the Streaming Lakehouse architecture.

Paimon manages metadata on a distributed file system in the manner of lake storage, uses the open ORC, Parquet, and Avro file formats, and supports all the major compute engines, including Flink, Spark, Hive, Trino, and Presto. More engines, including Doris and StarRocks, will be supported in the future.

Website: https://paimon.apache.org/

Github:https://github.com/apache/incubator-paimon

The following is a quick-start example for getting hands-on with Paimon:

I. Quick start in a local environment

Based on paimon 0.4-SNAPSHOT (running on Flink 1.14.4). A Flink version that is too old is not supported: Paimon builds against Flink 1.14.6 at minimum, and it has been verified not to work on Flink 1.14.0!

paimon-flink-1.14-0.4-20230504.002229-50.jar

1. Local Flink pseudo-cluster

0. First download the jar package and add it to Flink's lib directory;

1. Following the official demo, start the Flink SQL client, create a catalog, create a table, create a data source (a view), and insert data into the table — see the SQL sketch below.

2. Check the Flink UI at localhost:8081.

3. Inspect the data and metadata files on the filesystem.
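For reference, the SQL-client session looks roughly like this (a sketch following the official quick start; the catalog, table, and warehouse names are illustrative):

-- ./bin/sql-client.sh
CREATE CATALOG my_catalog WITH (
    'type' = 'paimon',
    'warehouse' = 'file:/tmp/paimon'
);

USE CATALOG my_catalog;

-- a primary-key table to hold the word counts
CREATE TABLE word_count (
    word STRING PRIMARY KEY NOT ENFORCED,
    cnt BIGINT
);

-- a datagen source (view) producing single-character words
CREATE TEMPORARY TABLE word_table (
    word STRING
) WITH (
    'connector' = 'datagen',
    'fields.word.length' = '1'
);

SET 'execution.checkpointing.interval' = '10 s';

INSERT INTO word_count SELECT word, COUNT(*) FROM word_table GROUP BY word;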

2. Running the Paimon demo in IDEA

pom dependency:

<dependency>
    <groupId>org.apache.paimon</groupId>
    <artifactId>paimon-flink-1.14</artifactId>
    <version>0.4-SNAPSHOT</version>
</dependency>

If it cannot be pulled from a repository, install it into your local Maven repository manually:

mvn install:install-file -DgroupId=org.apache.paimon -DartifactId=paimon-flink-1.14 -Dversion=0.4-SNAPSHOT -Dpackaging=jar -Dfile=D:\software\paimon-flink-1.14-0.4-20230504.002229-50.jar

2.1 Code

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

/**
 * @Author: YK.Leo
 * @Date: 2023-05-14 15:12
 * @Version: 1.0
 */
// Succeeds locally!
public class OfficeDemoV1 {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        env.enableCheckpointing(10000L);
        env.getCheckpointConfig().setCheckpointStorage("file:/D:/tmp/paimon/");
        TableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // 0. Create a catalog and a table
        tableEnv.executeSql("CREATE CATALOG my_catalog_api WITH (\n" +
                "    'type'='paimon',\n" +                // the catalog type must be 'paimon'
                "    'warehouse'='file:///D:/tmp/paimon'\n" +
                ")");
        tableEnv.executeSql("USE CATALOG my_catalog_api");
        tableEnv.executeSql("CREATE TABLE IF NOT EXISTS word_count_api (\n" +
                "    word STRING PRIMARY KEY NOT ENFORCED,\n" +
                "    cnt BIGINT\n" +
                ")");

        // 1. Write data: a datagen source that produces single-character words
        tableEnv.executeSql("CREATE TEMPORARY TABLE IF NOT EXISTS word_table_api (\n" +
                "    word STRING\n" +
                ") WITH (\n" +
                "    'connector' = 'datagen',\n" +
                "    'fields.word.length' = '1'\n" +
                ")");
        // tableEnv.executeSql("SET 'execution.checkpointing.interval' = '10 s'");
        tableEnv.executeSql("INSERT INTO word_count_api SELECT word, COUNT(*) FROM word_table_api GROUP BY word");
        // No env.execute() here: executeSql() already submits the job (see problem 4 below).
    }
}

2.2 Successful run in IDEA

3. Streaming read/write in IDEA

3.1 Streaming write

Code:

package com.study.flink.table.paimon.demo;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.StatementSet;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

/**
 * @Author: YK.Leo
 * @Date: 2023-05-17 11:11
 * @Version: 1.0
 */
// Succeeds locally!
public class OfficeStreamsWriteV2 {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        env.enableCheckpointing(10000L);
        env.getCheckpointConfig().setCheckpointStorage("file:/D:/tmp/paimon/");
        TableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // 0. Create a catalog and a database
        tableEnv.executeSql("CREATE CATALOG my_catalog_local WITH (\n" +
                "    'type'='paimon',\n" +                // the catalog type must be 'paimon'
                "    'warehouse'='file:///D:/tmp/paimon'\n" +
                ")");
        tableEnv.executeSql("USE CATALOG my_catalog_local");
        tableEnv.executeSql("CREATE DATABASE IF NOT EXISTS my_catalog_local.local_db");
        tableEnv.executeSql("USE local_db");

        // Drop and recreate the target table
        tableEnv.executeSql("DROP TABLE IF EXISTS paimon_tbl_streams");
        tableEnv.executeSql("CREATE TABLE IF NOT EXISTS paimon_tbl_streams(\n"
                + " uuid bigint,\n"
                + " name VARCHAR(3),\n"
                + " age int,\n"
                + " ts TIMESTAMP(3),\n"
                + " dt VARCHAR(10), \n"
                + " PRIMARY KEY (dt, uuid) NOT ENFORCED \n"
                + ") PARTITIONED BY (dt) \n"
                + " WITH (\n"
                + "    'merge-engine' = 'partial-update',\n"
                + "    'changelog-producer' = 'full-compaction', \n"
                + "    'file.format' = 'orc', \n"
                + "    'scan.mode' = 'compacted-full', \n"
                + "    'bucket' = '5', \n"
                + "    'sink.parallelism' = '5', \n"
                + "    'sequence.field' = 'ts' \n"        // todo: verify sequence.field behavior
                + ")");

        // datagen sources ====================================================================
        tableEnv.executeSql("CREATE TEMPORARY TABLE IF NOT EXISTS source_A (\n" +
                " uuid bigint PRIMARY KEY NOT ENFORCED,\n" +
                " `name` VARCHAR(3)," +
                " _ts1 TIMESTAMP(3)\n" +
                ") WITH (\n" +
                " 'connector' = 'datagen', \n" +
                " 'fields.uuid.kind'='sequence',\n" +
                " 'fields.uuid.start'='0', \n" +
                " 'fields.uuid.end'='1000000', \n" +
                " 'rows-per-second' = '1' \n" +
                ")");
        tableEnv.executeSql("CREATE TEMPORARY TABLE IF NOT EXISTS source_B (\n" +
                " uuid bigint PRIMARY KEY NOT ENFORCED,\n" +
                " `age` int," +
                " _ts2 TIMESTAMP(3)\n" +
                ") WITH (\n" +
                " 'connector' = 'datagen', \n" +
                " 'fields.uuid.kind'='sequence',\n" +
                " 'fields.uuid.start'='0', \n" +
                " 'fields.uuid.end'='1000000', \n" +
                " 'rows-per-second' = '1' \n" +
                ")");

        // Two separate tableEnv.executeSql("insert into ...") calls would submit two
        // independent jobs; group both INSERTs into one StatementSet (one Flink job).
        StatementSet statementSet = tableEnv.createStatementSet();
        statementSet
                .addInsertSql("insert into paimon_tbl_streams(uuid, name, ts, dt) select uuid, name, _ts1 as ts, date_format(_ts1,'yyyy-MM-dd') as dt from source_A")
                .addInsertSql("insert into paimon_tbl_streams(uuid, age, dt) select uuid, age, date_format(_ts2,'yyyy-MM-dd') as dt from source_B");
        statementSet.execute();
        // env.execute();  // not needed: statementSet.execute() already submits the job
    }
}

Result:

3.2 Streaming read (toChangelogStream)

Code:

package com.study.flink.table.paimon.demo;

import org.apache.flink.api.common.functions.FilterFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Schema;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.table.connector.ChangelogMode;
import org.apache.flink.types.Row;
import org.apache.flink.types.RowKind;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

/**
 * @Author: YK.Leo
 * @Date: 2023-05-15 18:50
 * @Version: 1.0
 */
// Streaming read of a single table works!
public class OfficeStreamReadV1 {

    public static final Logger LOGGER = LogManager.getLogger(OfficeStreamReadV1.class);

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        env.enableCheckpointing(10000L);
        env.getCheckpointConfig().setCheckpointStorage("file:/D:/tmp/paimon/");
        TableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // 0. Create the catalog; the table already exists, no need to create it again
        tableEnv.executeSql("CREATE CATALOG my_catalog_local WITH (\n" +
                "    'type'='paimon',\n" +                // the catalog type must be 'paimon'
                "    'warehouse'='file:///D:/tmp/paimon'\n" +
                ")");
        tableEnv.executeSql("USE CATALOG my_catalog_local");
        tableEnv.executeSql("CREATE DATABASE IF NOT EXISTS my_catalog_local.local_db");
        tableEnv.executeSql("USE local_db");

        // Only emit rows for which both streams have arrived
        // Table table = tableEnv.sqlQuery("SELECT * FROM paimon_tbl_streams");
        Table table = tableEnv.sqlQuery(
                "SELECT * FROM paimon_tbl_streams WHERE name is not null and age is not null");

        // toDataStream(table) fails here: it doesn't support consuming the update and
        // delete changes produced by the TableSourceScan node. So convert with
        // toChangelogStream in upsert mode and drop -U (UPDATE_BEFORE) rows: the
        // pre-update image of a row does not need to be re-emitted.
        DataStream<Row> dataStream = ((StreamTableEnvironment) tableEnv)
                .toChangelogStream(table,
                        Schema.newBuilder().primaryKey("dt", "uuid").build(),
                        ChangelogMode.upsert())
                .filter(new FilterFunction<Row>() {
                    @Override
                    public boolean filter(Row row) throws Exception {
                        boolean isNotUpdateBefore = !row.getKind().equals(RowKind.UPDATE_BEFORE);
                        if (!isNotUpdateBefore) {
                            LOGGER.info("UPDATE_BEFORE: " + row.toString());
                        }
                        return isNotUpdateBefore;
                    }
                });

        // Use the stream
        dataStream.executeAndCollect().forEachRemaining(System.out::println);
        // env.execute();  // not needed: executeAndCollect() already runs the pipeline
    }
}

Result:

II. Advanced: local (IDEA) multi-stream join test

The problem to solve:

Multiple streams share the same primary key, and each stream updates a different subset of the non-key fields; the rows are stitched together across streams via the primary key.

Note:

If two Flink jobs (or two pipelines) write the same Paimon table, a conflict arises right away: one of the streams keeps hitting exceptions and restarting.

Instead, use UNION ALL to merge the streams into one, so that a single Flink job writes the Paimon table (see the sketch after this note).

Use a primary-key table with 'merge-engine' = 'partial-update'.
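Condensed to its essentials, the pattern looks like this (a sketch only; table and source names are illustrative, and the full runnable job follows in section 1 below):

-- a primary-key table with partial-update: each writer fills in only its own columns
CREATE TABLE merged (
    uuid BIGINT,
    name STRING,  -- provided by stream A
    age  INT,     -- provided by stream B
    PRIMARY KEY (uuid) NOT ENFORCED
) WITH (
    'merge-engine' = 'partial-update',
    'changelog-producer' = 'full-compaction'
);

-- one job, one sink: UNION ALL the two streams instead of running two writers
INSERT INTO merged
SELECT uuid, name, cast(null as int) AS age FROM source_A
UNION ALL
SELECT uuid, cast(null as string) AS name, age FROM source_B;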

1. 'changelog-producer' = 'full-compaction'

(1) multiWrite code

package com.study.flink.table.paimon.multi;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.StatementSet;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

/**
 * @Author: YK.Leo
 * @Date: 2023-05-18 10:17
 * @Version: 1.0
 */
// Succeeds locally, and produces no conflicts: ran five minutes with no exceptions
// (and for days in a production environment)! The data can also be stream-read by another job!
public class MultiStreamsUnionWriteV1 {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        env.enableCheckpointing(10 * 1000L);
        env.getCheckpointConfig().setCheckpointStorage("file:/D:/tmp/paimon/");
        TableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // 0. Create a catalog and a database
        tableEnv.executeSql("CREATE CATALOG my_catalog_local WITH (\n" +
                "    'type'='paimon',\n" +                // the catalog type must be 'paimon'
                "    'warehouse'='file:///D:/tmp/paimon'\n" +
                ")");
        tableEnv.executeSql("USE CATALOG my_catalog_local");
        tableEnv.executeSql("CREATE DATABASE IF NOT EXISTS my_catalog_local.local_db");
        tableEnv.executeSql("USE local_db");

        // Drop & recreate the target table
        tableEnv.executeSql("DROP TABLE IF EXISTS paimon_tbl_streams");
        tableEnv.executeSql("CREATE TABLE IF NOT EXISTS paimon_tbl_streams(\n"
                + " uuid bigint,\n"
                + " name VARCHAR(3),\n"
                + " age int,\n"
                + " ts TIMESTAMP(3),\n"
                + " dt VARCHAR(10), \n"
                + " PRIMARY KEY (dt, uuid) NOT ENFORCED \n"
                + ") PARTITIONED BY (dt) \n"
                + " WITH (\n"
                + "    'merge-engine' = 'partial-update',\n"
                + "    'changelog-producer' = 'full-compaction', \n"
                + "    'file.format' = 'orc', \n"
                + "    'scan.mode' = 'compacted-full', \n"
                + "    'bucket' = '5', \n"
                + "    'sink.parallelism' = '5', \n"
                // + "    'write_only' = 'true', \n"
                + "    'sequence.field' = 'ts' \n"        // todo: verify sequence.field behavior
                + ")");

        // datagen sources ====================================================================
        tableEnv.executeSql("CREATE TEMPORARY TABLE IF NOT EXISTS source_A (\n" +
                " uuid bigint PRIMARY KEY NOT ENFORCED,\n" +
                " `name` VARCHAR(3)," +
                " _ts1 TIMESTAMP(3)\n" +
                ") WITH (\n" +
                " 'connector' = 'datagen', \n" +
                " 'fields.uuid.kind'='sequence',\n" +
                " 'fields.uuid.start'='0', \n" +
                " 'fields.uuid.end'='1000000', \n" +
                " 'rows-per-second' = '1' \n" +
                ")");
        tableEnv.executeSql("CREATE TEMPORARY TABLE IF NOT EXISTS source_B (\n" +
                " uuid bigint PRIMARY KEY NOT ENFORCED,\n" +
                " `age` int," +
                " _ts2 TIMESTAMP(3)\n" +
                ") WITH (\n" +
                " 'connector' = 'datagen', \n" +
                " 'fields.uuid.kind'='sequence',\n" +
                " 'fields.uuid.start'='0', \n" +
                " 'fields.uuid.end'='1000000', \n" +
                " 'rows-per-second' = '1' \n" +
                ")");

        // UNION ALL the two streams so that a single job (and a single sink) writes the table
        StatementSet statementSet = tableEnv.createStatementSet();
        String sqlText = "INSERT INTO paimon_tbl_streams(uuid, name, age, ts, dt) \n" +
                "select uuid, name, cast(null as int) as age, _ts1 as ts, date_format(_ts1,'yyyy-MM-dd') as dt from source_A \n" +
                "UNION ALL \n" +
                "select uuid, cast(null as string) as name, age, _ts2 as ts, date_format(_ts2,'yyyy-MM-dd') as dt from source_B";
        statementSet.addInsertSql(sqlText);
        statementSet.execute();
    }
}

The read code is the same as above.

(2) Read latency

That is, the delay from a client-side record landing in Paimon, through completion of the join with the server-side stream, to the row being picked up by the Flink streaming read of the Paimon table.

The latency is on the order of minutes.

2. 'changelog-producer' = 'lookup'

Reads and writes are the same as above; only the table options change at creation time: 'changelog-producer' = 'lookup', with the matching 'scan.mode' set to 'latest' — see the sketch below.
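Concretely, only the options differ from the DDL used earlier (a sketch; the column list is unchanged from the demo table above):

CREATE TABLE IF NOT EXISTS paimon_tbl_streams (
    uuid BIGINT,
    name VARCHAR(3),
    age  INT,
    ts   TIMESTAMP(3),
    dt   VARCHAR(10),
    PRIMARY KEY (dt, uuid) NOT ENFORCED
) PARTITIONED BY (dt) WITH (
    'merge-engine' = 'partial-update',
    'changelog-producer' = 'lookup',   -- was 'full-compaction'
    'scan.mode' = 'latest',            -- was 'compacted-full'
    'file.format' = 'orc',
    'bucket' = '5',
    'sink.parallelism' = '5',
    'sequence.field' = 'ts'
);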

Lookup mode may achieve lower latency, but its data quality remains to be verified.

Note:

In testing, full-compaction mode has so far been completely stable in a production environment (the two joined streams run at roughly 3K QPS, with a latency of 2-3 minutes).

99.9% of records arrive within 2-3 minutes

(with the multiWrite job's checkpoint interval set to 60 s).

III. Problems you may encounter

1. Caused by: java.lang.ClassCastException: org.codehaus.janino.CompilerFactory cannot be cast to org.codehaus.commons.compiler.ICompilerFactory

Cause: a conflict among org.codehaus.janino dependencies.

Fix: exclude them all:

<exclude>org.codehaus.janino:*</exclude>
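For example, with the maven-shade-plugin the exclusion goes into the artifactSet (a sketch, assuming you build a shaded jar; adapt it to your own build setup):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <configuration>
        <artifactSet>
            <excludes>
                <!-- exclude all org.codehaus.janino artifacts to avoid the conflict -->
                <exclude>org.codehaus.janino:*</exclude>
            </excludes>
        </artifactSet>
    </configuration>
</plugin>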

2. Caused by: java.lang.ClassNotFoundException: org.apache.flink.util.function.SerializableFunction

Cause: the Flink streaming and Flink table versions are inconsistent, or a required dependency is missing (here, Paimon depends on Flink 1.14.6 at minimum, which is incompatible with Flink 1.14.0).

Fix: upgrade Flink to 1.14.4 or later.

Flink configuration reference: Configuration | Apache Flink

3. Caused by: java.util.ServiceConfigurationError: org.apache.flink.table.factories.Factory: Provider org.apache.flink.table.store.connector.TableStoreManagedFactory not found

Fix: add a Factory file under the project's META-INF/services path, so that Flink's factory discovery can match the CatalogFactory and the catalog can be created — see the example below.
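For reference, the SPI file is a plain-text file named after the factory interface, listing the provider classes. The exact provider class depends on your Paimon / Flink Table Store version; the one below matches the error message above:

src/main/resources/META-INF/services/org.apache.flink.table.factories.Factory

with content:

org.apache.flink.table.store.connector.TableStoreManagedFactory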

4. Caused by: org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: No operators defined in streaming topology. Cannot execute.

If the pipeline is already submitted via tableEnv.executeSql(...) or statementSet.execute(), do not call env.execute() again!

5. Flink SQL does not accept a bare "null as ..."; write cast(null as data_type) instead, e.g. cast(null as string) — see the example below.
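A minimal illustration, reusing source_B from the demo above:

-- fails: the type of a bare NULL cannot be inferred
-- SELECT uuid, null AS name, age FROM source_B;

-- works: give the NULL an explicit type
SELECT uuid, cast(null as string) AS name, age FROM source_B;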

6. When creating a partitioned Paimon table with a primary key, the partition fields must be included in the primary key! Otherwise table creation fails with an error — see the illustration below.
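A minimal illustration of the rule (table names are illustrative):

-- OK: the partition field dt is included in the primary key
CREATE TABLE part_ok (
    uuid BIGINT,
    dt VARCHAR(10),
    PRIMARY KEY (dt, uuid) NOT ENFORCED
) PARTITIONED BY (dt);

-- fails at creation time: dt is a partition field but not part of the primary key
CREATE TABLE part_bad (
    uuid BIGINT,
    dt VARCHAR(10),
    PRIMARY KEY (uuid) NOT ENFORCED
) PARTITIONED BY (dt);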

[To be continued...]

