I am new to Spark-SQL. I have information in a Spark Dataframe like this:
Company  Type  Status
A        X     done
A        Y     done
A        Z     done
C        X     done
C        Y     done
B        Y     done
I want to display it as follows:

Company  X-type   Y-type  Z-type
A        done     done    done
B        pending  done    pending
C        done     done    pending
I am not able to achieve this in Spark-SQL. Please help.
You can groupBy Company and then use the pivot function on the column Type.
Here is a simple example:
import org.apache.spark.sql.functions._
import spark.implicits._  // needed for toDF on the RDD of tuples

val df = spark.sparkContext.parallelize(Seq(
  ("A", "X", "done"),
  ("A", "Y", "done"),
  ("A", "Z", "done"),
  ("C", "X", "done"),
  ("C", "Y", "done"),
  ("B", "Y", "done")
)).toDF("Company", "Type", "Status")

// Pivot on Type: cells with no matching row would be null,
// so coalesce replaces them with "pending".
val result = df.groupBy("Company")
  .pivot("Type")
  .agg(expr("coalesce(first(Status), 'pending')"))

result.show()
Output:
+-------+-------+----+-------+
|Company| X| Y| Z|
+-------+-------+----+-------+
| B|pending|done|pending|
| C| done|done|pending|
| A| done|done| done|
+-------+-------+----+-------+
You can rename the columns later.
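For example, to get the X-type / Y-type / Z-type headers from the question, one way (a sketch, reusing the result DataFrame from the snippet above) is withColumnRenamed:

```scala
// Rename each pivoted column, e.g. "X" -> "X-type".
// The renamed variable name is illustrative.
val renamed = result
  .withColumnRenamed("X", "X-type")
  .withColumnRenamed("Y", "Y-type")
  .withColumnRenamed("Z", "Z-type")

renamed.show()
```

If the set of Type values is not known in advance, you could instead build the renames by iterating over result.columns.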
Hope this helps!