
Spark Learning 52: Spark's org.apache.spark.SparkException: Task not serializable


This error is generally reported as org.apache.spark.SparkException: Task not serializable. A typical log looks like this:

17/12/06 14:20:10 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 28.4 KB, free 872.6 MB)
17/12/06 14:20:10 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.161:51006 (size: 28.4 KB, free: 873.0 MB)
17/12/06 14:20:10 INFO SparkContext: Created broadcast 0 from newAPIHadoopRDD at SparkOnHbaseSecond.java:92
Exception in thread "main" org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:288)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:108)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:2101)
    at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:370)
    at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:369)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.RDD.map(RDD.scala:369)
    at org.apache.spark.api.java.JavaRDDLike$class.map(JavaRDDLike.scala:93)
    at org.apache.spark.api.java.AbstractJavaRDDLike.map(JavaRDDLike.scala:45)
    at sparlsql.hbase.www.second.SparkOnHbaseSecond.main(SparkOnHbaseSecond.java:94)
Caused by: java.io.NotSerializableException: sparlsql.hbase.www.second.SparkOnHbaseSecond
Serialization stack:
    - object not serializable (class: sparlsql.hbase.www.second.SparkOnHbaseSecond, value: sparlsql.hbase.www.second.SparkOnHbaseSecond@602ae7b6)
    - field (class: sparlsql.hbase.www.second.SparkOnHbaseSecond$1, name: val$sparkOnHbase, type: class sparlsql.hbase.www.second.SparkOnHbaseSecond)
    - object (class sparlsql.hbase.www.second.SparkOnHbaseSecond$1, sparlsql.hbase.www.second.SparkOnHbaseSecond$1@37af1f93)
    - field (class: org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1, name: fun$1, type: interface org.apache.spark.api.java.function.Function)
    - object (class org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1, <function1>)
    at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:295)
    ... 12 more
17/12/06 14:20:10 INFO SparkContext: Invoking stop() from shutdown hook

The first cause is that the class is not serializable. The following code reproduces the error:


import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import scala.Tuple2;

public class SparkOnHbaseBack {

    private String aa = "1234";

    public static void main(String[] args) throws Exception {

        SparkSession spark = SparkSession.builder()
                .appName("lcc_java_read_hbase_register_to_table")
                .master("local[4]")
                .getOrCreate();

        // SparkOnHbaseBack does NOT implement java.io.Serializable.
        SparkOnHbaseBack sparkOnHbase = new SparkOnHbaseBack();

        // myRDD is the JavaPairRDD<ImmutableBytesWritable, Result> returned by
        // newAPIHadoopRDD; the HBase setup is elided here, as in the original snippet.
        JavaRDD<Row> personsRDD = myRDD.map(new Function<Tuple2<ImmutableBytesWritable, Result>, Row>() {

            @Override
            public Row call(Tuple2<ImmutableBytesWritable, Result> tuple) throws Exception {
                // A field defined outside main is referenced here, so the closure
                // captures the non-serializable enclosing instance (the val$sparkOnHbase
                // field in the serialization stack above) and the task fails.
                System.out.println("====tuple==========" + sparkOnHbase.aa);
                // Fix: move the field into this map method, or make the class
                // serializable: public class SparkOnHbaseSecond implements java.io.Serializable
                return RowFactory.create(sparkOnHbase.aa);
            }
        });
    }
}
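
For completeness, here is a minimal sketch of the corrected class. It assumes the HBase read goes through newAPIHadoopRDD with TableInputFormat, matching the log above; the table name "person" is hypothetical. It applies the first fix from the comments: the class declares implements java.io.Serializable, so the captured reference can be shipped to executors.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import scala.Tuple2;

// Fix 1: declare the class serializable so the reference captured
// by the anonymous Function can be serialized with the task closure.
public class SparkOnHbaseSecond implements java.io.Serializable {

    private String aa = "1234";

    public static void main(String[] args) throws Exception {

        SparkSession spark = SparkSession.builder()
                .appName("lcc_java_read_hbase_register_to_table")
                .master("local[4]")
                .getOrCreate();

        SparkOnHbaseSecond sparkOnHbase = new SparkOnHbaseSecond();

        // Read the HBase table as a pair RDD; "person" is a hypothetical table name.
        Configuration conf = HBaseConfiguration.create();
        conf.set(TableInputFormat.INPUT_TABLE, "person");
        JavaSparkContext jsc = new JavaSparkContext(spark.sparkContext());
        JavaPairRDD<ImmutableBytesWritable, Result> myRDD = jsc.newAPIHadoopRDD(
                conf, TableInputFormat.class, ImmutableBytesWritable.class, Result.class);

        JavaRDD<Row> personsRDD = myRDD.map(new Function<Tuple2<ImmutableBytesWritable, Result>, Row>() {

            @Override
            public Row call(Tuple2<ImmutableBytesWritable, Result> tuple) throws Exception {
                // Fix 2 (alternative): declare the value locally inside call()
                // instead of reading sparkOnHbase.aa, so no outer reference is captured.
                System.out.println("====tuple==========" + sparkOnHbase.aa);
                return RowFactory.create(sparkOnHbase.aa);
            }
        });

        System.out.println(personsRDD.count());
        spark.stop();
    }
}

Either change alone is enough: implementing java.io.Serializable lets Spark serialize the captured instance, while moving the value into call() avoids the capture entirely and is the cheaper option when the outer state is small.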