Exploding Array in Batches of size 'n'
Looking to explode a nested array with Spark into batches. The column below is a nested array parsed from an XML file. I'm now attempting to write the time-series data out in batches to a NoSQL database. For example:
+-------+-----------------------+
| ID | Example |
+-------+-----------------------+
| A| [[1,2],[3,4],[5,6]] |
+-------+-----------------------+
Output with batches of size 2:
+-------+-----------------------+
| ID | Example |
+-------+-----------------------+
| A| [[1,2],[3,4]] |
+-------+-----------------------+
| A| [[5,6]] |
+-------+-----------------------+
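For reference, a minimal DataFrame matching the tables above can be built as follows (this is a sketch; the actual schema produced by your XML parser may differ):
from pyspark.sql import SparkSession
from pyspark.sql.types import (ArrayType, IntegerType, StringType,
                               StructField, StructType)

spark = SparkSession.builder.getOrCreate()

# 'Example' is an array of arrays of integers, as in the table above.
schema = StructType([
    StructField("ID", StringType()),
    StructField("Example", ArrayType(ArrayType(IntegerType())))
])
df = spark.createDataFrame([("A", [[1, 2], [3, 4], [5, 6]])], schema)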
Tags: apache-spark, pyspark
asked May 15 '18 at 21:11 by Trace Smith (edited May 15 '18 at 21:18)
can you share the schema of your input dataframe and if possible of the expected dataframe? – Ramesh Maharjan, May 16 '18 at 3:09
1 Answer
For Spark 2.1+
You can take advantage of pyspark.sql.functions.posexplode() to explode the column along with the index at which each element appears in the array, and then divide that position by n (taking the floor) to create groups. For example, here is the output of using posexplode() on your DataFrame:
import pyspark.sql.functions as f
df.select('ID', f.posexplode('Example')).show()
#+---+---+------+
#| ID|pos| col|
#+---+---+------+
#| A| 0|[1, 2]|
#| A| 1|[3, 4]|
#| A| 2|[5, 6]|
#+---+---+------+
Notice that we get two columns, pos and col, instead of just one. Since we want groups of n, we can simply divide pos by n and take the floor to get the group index:
n = 2
df.select('ID', f.posexplode('Example'))\
    .withColumn("group", f.floor(f.col("pos")/n))\
    .show(truncate=False)
#+---+---+------+-----+
#|ID |pos|col |group|
#+---+---+------+-----+
#|A |0 |[1, 2]|0 |
#|A |1 |[3, 4]|0 |
#|A |2 |[5, 6]|1 |
#+---+---+------+-----+
Now group by the "ID"
and the "group"
and use pyspark.sql.functions.collect_list()
to get your desired output.
df.select('ID', f.posexplode('Example'))\
    .withColumn("group", f.floor(f.col("pos")/n))\
    .groupBy("ID", "group")\
    .agg(f.collect_list("col").alias("Example"))\
    .sort("group")\
    .drop("group")\
    .show(truncate=False)
#+---+----------------------------------------+
#|ID |Example |
#+---+----------------------------------------+
#|A |[WrappedArray(1, 2), WrappedArray(3, 4)]|
#|A |[WrappedArray(5, 6)] |
#+---+----------------------------------------+
You'll see that I also sorted by the "group" column and then dropped it, but this is optional depending on your needs.
For Older Versions of Spark
There are some other methods for Spark versions below 2.1. All of these methods produce the same output as above.
1. Using udf
You can use a udf to break your array into groups. For example:
def get_groups(array, n):
    # Slice the array into consecutive chunks of size n; the last chunk may be shorter.
    return [array[i:i + n] for i in range(0, len(array), n)]
from pyspark.sql.types import ArrayType, IntegerType

get_groups_of_2 = f.udf(
    lambda x: get_groups(x, 2),
    ArrayType(ArrayType(ArrayType(IntegerType())))
)
df.select("ID", f.explode(get_groups_of_2("Example")).alias("Example"))
.show(truncate=False)
The get_groups() function takes an array and returns an array of groups of n elements each.
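As a quick sanity check (plain Python, no Spark needed), here is what get_groups() produces for the example array:
# Groups of 2: the last group may be shorter.
print(get_groups([[1, 2], [3, 4], [5, 6]], 2))
# [[[1, 2], [3, 4]], [[5, 6]]]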
2. Using rdd
Another option is to serialize to an rdd and use the get_groups() function inside a call to map(), then convert back to a DataFrame. You'll have to specify the schema for this conversion to work properly:
from pyspark.sql.types import (ArrayType, IntegerType, StringType,
                               StructField, StructType)

n = 2
schema = StructType(
    [
        StructField("ID", StringType()),
        StructField("Example", ArrayType(ArrayType(ArrayType(IntegerType()))))
    ]
)
df.rdd.map(lambda x: (x["ID"], get_groups(x["Example"], n=n)))\
    .toDF(schema)\
    .select("ID", f.explode("Example").alias("Example"))\
    .show(truncate=False)
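Since the stated goal is writing these batches to a NoSQL database, one way to consume the batched DataFrame is foreachPartition(). The sketch below assumes a hypothetical write_batch() client method; substitute whatever API your database driver actually provides:
def write_partition(rows):
    # client = connect_to_nosql(...)  # hypothetical: open one connection per partition
    for row in rows:
        # Each row carries one batch of up to n inner arrays.
        # client.write_batch(key=row["ID"], values=row["Example"])  # hypothetical call
        pass

batched = df.select("ID", f.posexplode("Example"))\
    .withColumn("group", f.floor(f.col("pos")/n))\
    .groupBy("ID", "group")\
    .agg(f.collect_list("col").alias("Example"))\
    .drop("group")

batched.foreachPartition(write_partition)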
answered May 21 '18 at 15:51 by pault (edited May 21 '18 at 16:17)