I was trying to reproduce the example from [Databricks][1] and apply it to the new Kafka connector for Spark Structured Streaming; however, I cannot parse the JSON correctly using the out-of-the-box methods in Spark…
Note: the topic is written into Kafka in JSON format.
```scala
val ds1 = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", IP + ":9092")
  .option("zookeeper.connect", IP + ":2181")
  .option("subscribe", TOPIC)
  .option("startingOffsets", "earliest")
  .option("max.poll.records", 10)
  .option("failOnDataLoss", false)
  .load()
```
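To sanity-check that the payload arrives as expected, here is a minimal debugging sketch that just casts the raw Kafka bytes to strings and prints each micro-batch (assuming a console sink is acceptable for debugging):

```scala
// Debugging sketch (assumption: console output is fine here):
// cast the raw Kafka key/value bytes to strings and print each micro-batch.
val debugQuery = ds1
  .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
  .writeStream
  .format("console")
  .outputMode("append")
  .start()
```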
The following code won't work; I believe that's because the column `json` is a string and does not match the signature of the `from_json` method…
```scala
val df = ds1.select($"value" cast "string" as "json")
  .select(from_json("json") as "data")
  .select("data.*")
```
Any tips?
[UPDATE] Working example:
https://github.com/katsou55/kafka-spark-structured-streaming-example/blob/master/src/main/scala-2.11/Main.scala
Answer
First you need to define the schema for your JSON message, for example:

```scala
import org.apache.spark.sql.types.StructType
import spark.implicits._ // for the $"..." syntax

val schema = new StructType()
  .add($"id".string)
  .add($"name".string)
```
Now you can use this schema with the `from_json` method, like below:
```scala
import org.apache.spark.sql.functions.from_json

val df = ds1.select($"value" cast "string" as "json")
  .select(from_json($"json", schema) as "data")
  .select("data.*")
```
Attribution
Source: Link, Question Author: carlos rodrigues, Answer Author: Jacek Laskowski