Caused by: Unable to apply CreateTableEvent to an existing schema for table "waimai.CustomerServiceMessageRecord"


2024-08-02 10:44:01,039 INFO io.debezium.jdbc.JdbcConnection [] - Connection gracefully closed
2024-08-02 10:44:01,041 INFO org.apache.flink.cdc.connectors.mysql.source.assigners.MySqlSnapshotSplitAssigner [] - Start splitting table waimai.CustomerServiceMessageRecord into chunks...
2024-08-02 10:44:01,593 INFO org.apache.flink.cdc.connectors.mysql.source.assigners.MySqlChunkSplitter [] - The distribution factor of table waimai.CustomerServiceMessageRecord is 1.0163 according to the min split key 6, max split key 5263766 and approximate row count 5179631
2024-08-02 10:44:01,593 INFO org.apache.flink.cdc.connectors.mysql.source.assigners.MySqlChunkSplitter [] - The actual distribution factor for table waimai.CustomerServiceMessageRecord is 1.0163, the lower bound of evenly distribution factor is 0.05, the upper bound of evenly distribution factor is 1000.0
2024-08-02 10:44:01,593 INFO org.apache.flink.cdc.connectors.mysql.source.assigners.MySqlChunkSplitter [] - Use evenly-sized chunk optimization for table waimai.CustomerServiceMessageRecord, the approximate row count is 5179631, the chunk size is 8096, the dynamic chunk size is 8227
2024-08-02 10:44:01,597 INFO org.apache.flink.cdc.connectors.mysql.source.assigners.MySqlSnapshotSplitAssigner [] - Split table waimai.CustomerServiceMessageRecord into 640 chunks, time cost: 556ms.
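(Aside: the chunk-splitting numbers in the log lines above can be reproduced with simple arithmetic. The sketch below mirrors what `MySqlChunkSplitter` appears to compute from the logged values; the real implementation may differ in rounding details.)

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class ChunkMath {
    // distribution factor = (max split key - min split key + 1) / approximate row count,
    // rounded up to 4 decimal places (matches the 1.0163 in the log)
    static double distributionFactor(long min, long max, long rowCount) {
        return new BigDecimal(max - min + 1)
                .divide(new BigDecimal(rowCount), 4, RoundingMode.CEILING)
                .doubleValue();
    }

    // evenly-sized chunk optimization scales the configured chunk size by the factor
    static int dynamicChunkSize(double factor, int chunkSize) {
        return Math.max((int) (factor * chunkSize), 1);
    }

    // number of chunks needed to cover the key range (ceiling division)
    static long chunkCount(long min, long max, int dynamicChunkSize) {
        return (max - min + 1 + dynamicChunkSize - 1) / dynamicChunkSize;
    }

    public static void main(String[] args) {
        long min = 6L, max = 5_263_766L;   // split key bounds from the log
        long rowCount = 5_179_631L;        // approximate row count
        int chunkSize = 8096;              // chunk size from the log

        double factor = distributionFactor(min, max, rowCount);
        int dynamic = dynamicChunkSize(factor, chunkSize);
        System.out.println(factor);                        // 1.0163
        System.out.println(dynamic);                       // 8227
        System.out.println(chunkCount(min, max, dynamic)); // 640
    }
}
```

With the logged inputs this reproduces all three figures: factor 1.0163, dynamic chunk size 8227, and 640 chunks.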
2024-08-02 10:44:03,487 INFO org.apache.flink.yarn.YarnResourceManagerDriver [] - Received 1 containers.
2024-08-02 10:44:03,487 INFO org.apache.flink.yarn.YarnResourceManagerDriver [] - Received 1 containers with priority 1, 1 pending container requests.
2024-08-02 10:44:03,487 INFO org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager [] - Requested worker container_1722515335880_0009_01_000008(zodiac:35389) with resource spec WorkerResourceSpec {cpuCores=4.0, taskHeapSize=843.600mb (884578696 bytes), taskOffHeapSize=0 bytes, networkMemSize=219.920mb (230602836 bytes), managedMemSize=879.680mb (922411347 bytes), numSlots=4}.
2024-08-02 10:44:03,487 INFO org.apache.flink.yarn.YarnResourceManagerDriver [] - Removing container request Capability[<memory:2728, vCores:4>]Priority[1]AllocationRequestId[0]ExecutionTypeRequest[{Execution Type: GUARANTEED, Enforce Execution Type: false}]Resource Profile[null].
2024-08-02 10:44:03,487 INFO org.apache.flink.yarn.YarnResourceManagerDriver [] - Accepted 1 requested containers, returned 0 excess containers, 0 pending container requests of resource <memory:2728, vCores:4>.
2024-08-02 10:44:03,487 INFO org.apache.flink.yarn.YarnResourceManagerDriver [] - TaskExecutor container_1722515335880_0009_01_000008(zodiac:35389) will be started on zodiac with TaskExecutorProcessSpec {cpuCores=4.0, frameworkHeapSize=128.000mb (134217728 bytes), frameworkOffHeapSize=128.000mb (134217728 bytes), taskHeapSize=843.600mb (884578696 bytes), taskOffHeapSize=0 bytes, networkMemSize=219.920mb (230602836 bytes), managedMemorySize=879.680mb (922411347 bytes), jvmMetaspaceSize=256.000mb (268435456 bytes), jvmOverheadSize=272.800mb (286051537 bytes), numSlots=4}.
2024-08-02 10:44:03,490 INFO org.apache.flink.yarn.YarnResourceManagerDriver [] - Creating container launch context for TaskManagers
2024-08-02 10:44:03,491 INFO org.apache.flink.yarn.YarnResourceManagerDriver [] - Starting TaskManagers
2024-08-02 10:44:03,491 INFO org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl [] - Processing Event EventType: START_CONTAINER for Container container_1722515335880_0009_01_000008
2024-08-02 10:44:07,695 INFO org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager [] - Registering TaskManager with ResourceID container_1722515335880_0009_01_000008(zodiac:35389) (pekko.tcp://flink@zodiac:39721/user/rpc/taskmanager_0) at ResourceManager
2024-08-02 10:44:07,707 INFO org.apache.flink.runtime.resourcemanager.slotmanager.FineGrainedSlotManager [] - Registering task executor container_1722515335880_0009_01_000008 under 1903fce232cf2ad3f99e4c0176999ecf at the slot manager.
2024-08-02 10:44:07,708 INFO org.apache.flink.runtime.resourcemanager.slotmanager.DefaultSlotStatusSyncer [] - Starting allocation of slot 829608e919035367123a76884e4cf2f9 from container_1722515335880_0009_01_000008 for job 4f5cd9db57284f113ba8ef9e641d75c2 with resource profile ResourceProfile{cpuCores=1, taskHeapMemory=210.900mb (221144674 bytes), taskOffHeapMemory=0 bytes, managedMemory=219.920mb (230602836 bytes), networkMemory=54.980mb (57650709 bytes)}.
2024-08-02 10:44:07,708 INFO org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager [] - Worker container_1722515335880_0009_01_000008(zodiac:35389) is registered.
2024-08-02 10:44:07,708 INFO org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager [] - Worker container_1722515335880_0009_01_000008(zodiac:35389) with resource spec WorkerResourceSpec {cpuCores=4.0, taskHeapSize=843.600mb (884578696 bytes), taskOffHeapSize=0 bytes, networkMemSize=219.920mb (230602836 bytes), managedMemSize=879.680mb (922411347 bytes), numSlots=4} was requested in current attempt. Current pending count after registering: 0.
2024-08-02 10:44:07,764 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Source: Flink CDC Event Source: mysql -> SchemaOperator -> PrePartition (1/1) (34939782c56fff113298b099a827eef6_cbc357ccb763df2852fee8c4fc7d55f2_0_0) switched from SCHEDULED to DEPLOYING.
2024-08-02 10:44:07,764 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Deploying Source: Flink CDC Event Source: mysql -> SchemaOperator -> PrePartition (1/1) (attempt #0) with attempt id 34939782c56fff113298b099a827eef6_cbc357ccb763df2852fee8c4fc7d55f2_0_0 and vertex id cbc357ccb763df2852fee8c4fc7d55f2_0 to container_1722515335880_0009_01_000008 @ zodiac (dataPort=39965) with allocation id 829608e919035367123a76884e4cf2f9
2024-08-02 10:44:07,765 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph [] - PostPartition -> Sink Writer: Flink CDC Event Sink: doris (1/1) (34939782c56fff113298b099a827eef6_0deb1b26a3d9eb3c8f0c11f7110b2903_0_0) switched from SCHEDULED to DEPLOYING.
2024-08-02 10:44:07,765 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Deploying PostPartition -> Sink Writer: Flink CDC Event Sink: doris (1/1) (attempt #0) with attempt id 34939782c56fff113298b099a827eef6_0deb1b26a3d9eb3c8f0c11f7110b2903_0_0 and vertex id 0deb1b26a3d9eb3c8f0c11f7110b2903_0 to container_1722515335880_0009_01_000008 @ zodiac (dataPort=39965) with allocation id 829608e919035367123a76884e4cf2f9
2024-08-02 10:44:08,834 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph [] - PostPartition -> Sink Writer: Flink CDC Event Sink: doris (1/1) (34939782c56fff113298b099a827eef6_0deb1b26a3d9eb3c8f0c11f7110b2903_0_0) switched from DEPLOYING to INITIALIZING.
2024-08-02 10:44:08,834 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Source: Flink CDC Event Source: mysql -> SchemaOperator -> PrePartition (1/1) (34939782c56fff113298b099a827eef6_cbc357ccb763df2852fee8c4fc7d55f2_0_0) switched from DEPLOYING to INITIALIZING.
2024-08-02 10:44:09,295 INFO org.apache.flink.cdc.runtime.operators.schema.coordinator.SchemaRegistryRequestHandler [] - Register sink subtask 0.
2024-08-02 10:44:09,491 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph [] - PostPartition -> Sink Writer: Flink CDC Event Sink: doris (1/1) (34939782c56fff113298b099a827eef6_0deb1b26a3d9eb3c8f0c11f7110b2903_0_0) switched from INITIALIZING to RUNNING.
2024-08-02 10:44:09,529 INFO org.apache.flink.runtime.source.coordinator.SourceCoordinator [] - Source Source: Flink CDC Event Source: mysql registering reader for parallel task 0 (#0) @ zodiac
2024-08-02 10:44:09,529 INFO org.apache.flink.runtime.source.coordinator.SourceCoordinator [] - Source Source: Flink CDC Event Source: mysql received split request from parallel task 0 (#0)
2024-08-02 10:44:09,553 INFO org.apache.flink.cdc.connectors.mysql.source.enumerator.MySqlSourceEnumerator [] - The enumerator assigns split MySqlSnapshotSplit{tableId=waimai.CustomerServiceMessageRecord, splitId='waimai.CustomerServiceMessageRecord:0', splitKeyType=[id INT NOT NULL], splitStart=null, splitEnd=[8233], highWatermark=null} to subtask 0
2024-08-02 10:44:09,560 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Source: Flink CDC Event Source: mysql -> SchemaOperator -> PrePartition (1/1) (34939782c56fff113298b099a827eef6_cbc357ccb763df2852fee8c4fc7d55f2_0_0) switched from INITIALIZING to RUNNING.
2024-08-02 10:44:12,883 INFO org.apache.flink.cdc.runtime.operators.schema.coordinator.SchemaRegistryRequestHandler [] - Received schema change event request from table waimai.CustomerServiceMessageRecord. Start to buffer requests for others.
2024-08-02 10:44:12,884 INFO org.apache.flink.cdc.runtime.operators.schema.coordinator.SchemaManager [] - Handling schema change event: CreateTableEvent{tableId=waimai.CustomerServiceMessageRecord, schema=columns={id INT NOT NULL,date DATE NOT NULL,platform VARCHAR(32) NOT NULL,shopid VARCHAR(32) NOT NULL,targetId VARCHAR(64) NOT NULL,sendTime TIMESTAMP(0) NOT NULL,sessionid VARCHAR(64) NOT NULL,poi_id_str VARCHAR(64),uid VARCHAR(64),userTags STRING,chatId BIGINT,bizId VARCHAR(64),type VARCHAR(16),status VARCHAR(32)}, primaryKeys=id, options=()}
2024-08-02 10:44:12,963 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Source: Flink CDC Event Source: mysql -> SchemaOperator -> PrePartition (1/1) (34939782c56fff113298b099a827eef6_cbc357ccb763df2852fee8c4fc7d55f2_0_0) switched from RUNNING to FAILED on container_1722515335880_0009_01_000008 @ zodiac (dataPort=39965).
java.lang.IllegalStateException: Failed to send request to coordinator: org.apache.flink.cdc.runtime.operators.schema.event.SchemaChangeRequest@3e1bc363
at org.apache.flink.cdc.runtime.operators.schema.SchemaOperator.sendRequestToCoordinator(SchemaOperator.java:321) ~[?:?]
at org.apache.flink.cdc.runtime.operators.schema.SchemaOperator.requestSchemaChange(SchemaOperator.java:295) ~[?:?]
at org.apache.flink.cdc.runtime.operators.schema.SchemaOperator.handleSchemaChangeEvent(SchemaOperator.java:280) ~[?:?]
at org.apache.flink.cdc.runtime.operators.schema.SchemaOperator.processElement(SchemaOperator.java:169) ~[?:?]
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:75) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:50) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:29) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.runtime.tasks.SourceOperatorStreamTask$AsyncDataOutputToOutput.emitRecord(SourceOperatorStreamTask.java:310) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.api.operators.source.SourceOutputWithWatermarks.collect(SourceOutputWithWatermarks.java:110) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.api.operators.source.SourceOutputWithWatermarks.collect(SourceOutputWithWatermarks.java:101) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.cdc.connectors.mysql.source.reader.MySqlPipelineRecordEmitter.sendCreateTableEvent(MySqlPipelineRecordEmitter.java:126) ~[?:?]
at org.apache.flink.cdc.connectors.mysql.source.reader.MySqlPipelineRecordEmitter.processElement(MySqlPipelineRecordEmitter.java:110) ~[?:?]
at org.apache.flink.cdc.connectors.mysql.source.reader.MySqlRecordEmitter.emitRecord(MySqlRecordEmitter.java:73) ~[?:?]
at org.apache.flink.cdc.connectors.mysql.source.reader.MySqlRecordEmitter.emitRecord(MySqlRecordEmitter.java:46) ~[?:?]
at org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:203) ~[flink-connector-files-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:422) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:579) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:909) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:858) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:937) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:751) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566) ~[flink-dist-1.19.1.jar:1.19.1]
at java.lang.Thread.run(Thread.java:829) ~[?:?]
Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: Unable to apply CreateTableEvent to an existing schema for table "waimai.CustomerServiceMessageRecord"
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395) ~[?:?]
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2005) ~[?:?]
at org.apache.flink.cdc.runtime.operators.schema.SchemaOperator.sendRequestToCoordinator(SchemaOperator.java:318) ~[?:?]
... 26 more
Caused by: java.lang.IllegalArgumentException: Unable to apply CreateTableEvent to an existing schema for table "waimai.CustomerServiceMessageRecord"
at org.apache.flink.cdc.common.utils.Preconditions.checkArgument(Preconditions.java:129) ~[?:?]
at org.apache.flink.cdc.runtime.operators.schema.coordinator.SchemaManager.handleCreateTableEvent(SchemaManager.java:149) ~[?:?]
at org.apache.flink.cdc.runtime.operators.schema.coordinator.SchemaManager.applySchemaChange(SchemaManager.java:103) ~[?:?]
at org.apache.flink.cdc.runtime.operators.schema.coordinator.SchemaDerivation.applySchemaChange(SchemaDerivation.java:86) ~[?:?]
at org.apache.flink.cdc.runtime.operators.schema.coordinator.SchemaRegistryRequestHandler.handleSchemaChangeRequest(SchemaRegistryRequestHandler.java:140) ~[?:?]
at org.apache.flink.cdc.runtime.operators.schema.coordinator.SchemaRegistry.handleCoordinationRequest(SchemaRegistry.java:177) ~[?:?]
at org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.deliverCoordinationRequestToCoordinator(DefaultOperatorCoordinatorHandler.java:146) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.runtime.scheduler.SchedulerBase.deliverCoordinationRequestToCoordinator(SchedulerBase.java:1093) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.runtime.jobmaster.JobMaster.sendRequestToCoordinator(JobMaster.java:622) ~[flink-dist-1.19.1.jar:1.19.1]
at jdk.internal.reflect.GeneratedMethodAccessor78.invoke(Unknown Source) ~[?:?]
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.lambda$handleRpcInvocation$1(PekkoRpcActor.java:309) ~[flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcInvocation(PekkoRpcActor.java:307) ~[flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcMessage(PekkoRpcActor.java:222) ~[flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.flink.runtime.rpc.pekko.FencedPekkoRpcActor.handleRpcMessage(FencedPekkoRpcActor.java:85) ~[flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleMessage(PekkoRpcActor.java:168) ~[flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:127) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:126) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:175) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.pekko.actor.AbstractActor.aroundReceive(AbstractActor.scala:229) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020) [?:?]
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594) [?:?]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183) [?:?]
2024-08-02 10:44:12,972 INFO org.apache.flink.runtime.jobmaster.JobMaster [] - 2 tasks will be restarted to recover the failed task 34939782c56fff113298b099a827eef6_cbc357ccb763df2852fee8c4fc7d55f2_0_0.
2024-08-02 10:44:12,973 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Job mysql-sync-doris-message (4f5cd9db57284f113ba8ef9e641d75c2) switched from state RUNNING to RESTARTING.
2024-08-02 10:44:12,973 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph [] - PostPartition -> Sink Writer: Flink CDC Event Sink: doris (1/1) (34939782c56fff113298b099a827eef6_0deb1b26a3d9eb3c8f0c11f7110b2903_0_0) switched from RUNNING to CANCELING.
2024-08-02 10:44:12,978 INFO org.apache.flink.runtime.source.coordinator.SourceCoordinator [] - Removing registered reader after failure for subtask 0 (#0) of source Source: Flink CDC Event Source: mysql.
2024-08-02 10:44:12,995 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph [] - PostPartition -> Sink Writer: Flink CDC Event Sink: doris (1/1) (34939782c56fff113298b099a827eef6_0deb1b26a3d9eb3c8f0c11f7110b2903_0_0) switched from CANCELING to CANCELED.
2024-08-02 10:44:12,996 INFO org.apache.flink.runtime.resourcemanager.slotmanager.FineGrainedSlotManager [] - Clearing resource requirements of job 4f5cd9db57284f113ba8ef9e641d75c2
2024-08-02 10:44:12,996 INFO org.apache.flink.runtime.resourcemanager.slotmanager.FineGrainedTaskManagerTracker [] - Clear all pending allocations for job 4f5cd9db57284f113ba8ef9e641d75c2.
2024-08-02 10:44:22,979 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Job mysql-sync-doris-message (4f5cd9db57284f113ba8ef9e641d75c2) switched from state RESTARTING to RUNNING.
2024-08-02 10:44:22,979 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator [] - No checkpoint found during restore.
2024-08-02 10:44:22,979 ERROR org.apache.flink.cdc.runtime.operators.schema.coordinator.SchemaRegistry [] - Subtask 0 reset at checkpoint -1.
java.lang.IllegalStateException: Failed to send request to coordinator: org.apache.flink.cdc.runtime.operators.schema.event.SchemaChangeRequest@3e1bc363
at org.apache.flink.cdc.runtime.operators.schema.SchemaOperator.sendRequestToCoordinator(SchemaOperator.java:321) ~[?:?]
at org.apache.flink.cdc.runtime.operators.schema.SchemaOperator.requestSchemaChange(SchemaOperator.java:295) ~[?:?]
at org.apache.flink.cdc.runtime.operators.schema.SchemaOperator.handleSchemaChangeEvent(SchemaOperator.java:280) ~[?:?]
at org.apache.flink.cdc.runtime.operators.schema.SchemaOperator.processElement(SchemaOperator.java:169) ~[?:?]
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:75) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:50) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:29) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.runtime.tasks.SourceOperatorStreamTask$AsyncDataOutputToOutput.emitRecord(SourceOperatorStreamTask.java:310) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.api.operators.source.SourceOutputWithWatermarks.collect(SourceOutputWithWatermarks.java:110) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.api.operators.source.SourceOutputWithWatermarks.collect(SourceOutputWithWatermarks.java:101) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.cdc.connectors.mysql.source.reader.MySqlPipelineRecordEmitter.sendCreateTableEvent(MySqlPipelineRecordEmitter.java:126) ~[?:?]
at org.apache.flink.cdc.connectors.mysql.source.reader.MySqlPipelineRecordEmitter.processElement(MySqlPipelineRecordEmitter.java:110) ~[?:?]
at org.apache.flink.cdc.connectors.mysql.source.reader.MySqlRecordEmitter.emitRecord(MySqlRecordEmitter.java:73) ~[?:?]
at org.apache.flink.cdc.connectors.mysql.source.reader.MySqlRecordEmitter.emitRecord(MySqlRecordEmitter.java:46) ~[?:?]
at org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:203) ~[flink-connector-files-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:422) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:579) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:909) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:858) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:937) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:751) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566) ~[flink-dist-1.19.1.jar:1.19.1]
at java.lang.Thread.run(Thread.java:829) ~[?:?]
Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: Unable to apply CreateTableEvent to an existing schema for table "waimai.CustomerServiceMessageRecord"
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395) ~[?:?]
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2005) ~[?:?]
at org.apache.flink.cdc.runtime.operators.schema.SchemaOperator.sendRequestToCoordinator(SchemaOperator.java:318) ~[?:?]
... 26 more
Caused by: java.lang.IllegalArgumentException: Unable to apply CreateTableEvent to an existing schema for table "waimai.CustomerServiceMessageRecord"
at org.apache.flink.cdc.common.utils.Preconditions.checkArgument(Preconditions.java:129) ~[?:?]
at org.apache.flink.cdc.runtime.operators.schema.coordinator.SchemaManager.handleCreateTableEvent(SchemaManager.java:149) ~[?:?]
at org.apache.flink.cdc.runtime.operators.schema.coordinator.SchemaManager.applySchemaChange(SchemaManager.java:103) ~[?:?]
at org.apache.flink.cdc.runtime.operators.schema.coordinator.SchemaDerivation.applySchemaChange(SchemaDerivation.java:86) ~[?:?]
at org.apache.flink.cdc.runtime.operators.schema.coordinator.SchemaRegistryRequestHandler.handleSchemaChangeRequest(SchemaRegistryRequestHandler.java:140) ~[?:?]
at org.apache.flink.cdc.runtime.operators.schema.coordinator.SchemaRegistry.handleCoordinationRequest(SchemaRegistry.java:177) ~[?:?]
at org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.deliverCoordinationRequestToCoordinator(DefaultOperatorCoordinatorHandler.java:146) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.runtime.scheduler.SchedulerBase.deliverCoordinationRequestToCoordinator(SchedulerBase.java:1093) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.runtime.jobmaster.JobMaster.sendRequestToCoordinator(JobMaster.java:622) ~[flink-dist-1.19.1.jar:1.19.1]
at jdk.internal.reflect.GeneratedMethodAccessor78.invoke(Unknown Source) ~[?:?]
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.lambda$handleRpcInvocation$1(PekkoRpcActor.java:309) ~[flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83) ~[flink-dist-1.19.1.jar:1.19.1]
at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcInvocation(PekkoRpcActor.java:307) ~[flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcMessage(PekkoRpcActor.java:222) ~[flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.flink.runtime.rpc.pekko.FencedPekkoRpcActor.handleRpcMessage(FencedPekkoRpcActor.java:85) ~[flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleMessage(PekkoRpcActor.java:168) ~[flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:127) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:126) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:175) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.pekko.actor.AbstractActor.aroundReceive(AbstractActor.scala:229) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) [flink-rpc-akkae345fd9d-8c32-4b17-86b1-9c3460d3a1cb.jar:1.19.1]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020) [?:?]
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594) [?:?]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183) [?:?]
2024-08-02 10:44:22,985 INFO org.apache.flink.runtime.source.coordinator.SourceCoordinator [] - Recovering subtask 0 to checkpoint -1 for source Source: Flink CDC Event Source: mysql to checkpoint.

1 Answer

Could you please describe in detail what operations you performed and what problem you ran into? Did you build 3.0 yourself? Judging from the title and the log, the error occurs because a table that already exists is being created again.
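For context, the failing precondition behaves roughly like this (a simplified sketch of the coordinator-side bookkeeping, not the actual Flink CDC `SchemaManager` source): the schema coordinator keeps one schema per table ID, and a second `CreateTableEvent` for a table it has already registered trips the check with exactly the message seen in the stack trace.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of coordinator-side schema bookkeeping: a CreateTableEvent
// for an already-registered table ID is rejected, mirroring the log's error.
public class SchemaManagerSketch {
    private final Map<String, String> schemas = new HashMap<>(); // tableId -> schema

    public void handleCreateTableEvent(String tableId, String schema) {
        if (schemas.containsKey(tableId)) {
            throw new IllegalArgumentException(
                "Unable to apply CreateTableEvent to an existing schema for table \""
                    + tableId + "\"");
        }
        schemas.put(tableId, schema);
    }
}
```

In this thread's log, the source emits a `CreateTableEvent` for `waimai.CustomerServiceMessageRecord` on restart while the coordinator still holds that table's schema, so the second registration fails.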