UnsupportedOperationException while creating a dataset manually using Java SparkSession

I am trying to create a Dataset from Strings like below in my JUnit test.

import java.util.ArrayList;
import java.util.List;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

SparkSession sparkSession = SparkSession.builder().appName("Job Test").master("local[*]").getOrCreate();

String some1_json = readFileAsString("some1.json");
String some2_json = readFileAsString("some2.json");
String id = "some_id";

List<String[]> rowStrs = new ArrayList<>();
rowStrs.add(new String[] {id, some1_json, some2_json});

JavaSparkContext javaSparkContext = new JavaSparkContext(sparkSession.sparkContext());
JavaRDD<Row> rowRDD = javaSparkContext.parallelize(rowStrs).map(RowFactory::create);
StructType schema = new StructType(new StructField[]{
        DataTypes.createStructField("id", DataTypes.StringType, false),
        DataTypes.createStructField("some1_json", DataTypes.StringType, false),
        DataTypes.createStructField("some2_json", DataTypes.StringType, false)});

Dataset<Row> datasetUnderTest = sparkSession.sqlContext().createDataFrame(rowRDD, schema);
datasetUnderTest.show();

But I am seeing the error below:

java.lang.ExceptionInInitializerError
    at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:103)
    at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
    at org.apache.spark.sql.internal.BaseSessionStateBuilder.catalog$lzycompute(BaseSessionStateBuilder.scala:133)
...
....
Caused by: java.lang.UnsupportedOperationException: Not implemented by the DistributedFileSystem FileSystem implementation
    at org.apache.hadoop.fs.FileSystem.getScheme(FileSystem.java:215)
    at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2284)
...
...

What am I missing here?

My main method works fine, but this test fails. It looks like something is not being read from the classpath correctly.

Answer

I fixed it by excluding the hadoop-core dependency below from all Spark-related dependencies:

<exclusions>
    <exclusion>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-core</artifactId>
    </exclusion>
</exclusions>
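
For context, the root cause appears to be a classpath conflict: the legacy Hadoop 1.x hadoop-core jar contains an old org.apache.hadoop.fs.FileSystem that does not implement getScheme(), while the newer Hadoop classes Spark pulls in call it, hence the UnsupportedOperationException. Below is a sketch of how the exclusion might look applied to one Spark dependency in a Maven pom.xml; the spark-sql artifact and version shown are placeholders, so apply the exclusion to whichever Spark artifacts your build actually declares:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.4.4</version>
    <exclusions>
        <!-- hadoop-core is the legacy Hadoop 1.x artifact; its FileSystem
             class predates getScheme(), which newer Hadoop code calls -->
        <exclusion>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>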
