Convert Spark pipeline to dataframe










The Spark Pipeline framework makes it possible to build reproducible pipelines of transforms for machine learning and other applications. However, I also want to be able to do exploratory analysis on the dataframes those pipelines create.



In my case, I have ~100 columns, of which 80 are strings that need to be one-hot encoded:



from pyspark.ml.feature import OneHotEncoderEstimator, StringIndexer, VectorAssembler
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression

# cols_to_one_hot_encode_2 is a list of columns that need to be one-hot encoded
# cols_to_keep_as_is are columns that are **not** one-hot encoded
# indexers is a list of StringIndexer stages (built earlier) that turn the
# string columns into the numeric index columns the encoder expects

cols_to_one_hot_encode_3 = [i + "_hot" for i in cols_to_one_hot_encode_2]
encoder = OneHotEncoderEstimator(inputCols=cols_to_one_hot_encode_2,
                                 outputCols=cols_to_one_hot_encode_3, dropLast=False)

# assemble the pipeline
vectorAssembler = VectorAssembler(inputCols=cols_to_keep_as_is + cols_to_one_hot_encode_3,
                                  outputCol="features")
all_stages = indexers + [encoder, vectorAssembler]
transformationPipeline = Pipeline(stages=all_stages)
fittedPipeline = transformationPipeline.fit(df_3)
dataset = fittedPipeline.transform(df_3)

# now pass to logistic regression
selectedcols = ["response_variable", "features"]  # + df_3.columns
dataset_2 = dataset.select(selectedcols)

# create the initial LogisticRegression model
lr = LogisticRegression(labelCol="response_variable", featuresCol="features",
                        maxIter=10, elasticNetParam=1)

# train the model
lrModel = lr.fit(dataset_2)


When I look at dataset_2 with display(dataset_2), it prints:



response_variable features
0 [0,6508,[1,4,53,155,166,186,205,242,2104,6225,6498],[8220,1,1,1,1,1,1,1,1,1,1]]
0 [0,6508,[1,3,53,155,165,185,207,243,2104,6225,6498],[8220,1,1,1,1,1,1,1,1,1,1]]
0 [0,6508,[1,2,53,158,170,185,206,241,2104,6225,6498],[8222,1,1,1,1,1,1,1,1,1,1]]
0 [0,6508,[1,3,53,156,168,185,205,240,2104,6225,6498],[8222,1,1,1,1,1,1,1,1,1,1]]
0 [0,6508,[1,2,53,155,166,185,205,240,2104,6225,6498],[8223,1,1,1,1,1,1,1,1,1,1]]


This is totally useless for feature exploration. Notice that the one-hot encoder has exploded my features from ~100 columns to a vector of length 6508.
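(For reference, each features cell above appears to be Spark's serialized SparseVector: the leading 0 marks the sparse type, 6508 is the vector size, followed by the lists of active indices and their values. A toy decoding of a truncated first row, purely for illustration:)

from pyspark.ml.linalg import SparseVector

# size-6508 vector with nonzero entries only at the listed positions
# (indices/values truncated from the first displayed row)
v = SparseVector(6508, [1, 4, 53], [8220.0, 1.0, 1.0])
print(v[1])   # 8220.0 -> a kept-as-is numeric feature
print(v[4])   # 1.0    -> a one-hot indicator that is "on"
print(v[0])   # 0.0    -> every index not listed is zero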



My question



How do I look at the dataframe that is created under the hood by the pipeline?
It should be a dataframe with all 6508 features as individual columns and the corresponding number of rows. For example, I want something like:



response_variable feature_1_hot_1 feature_1_hot_2 feature_1_hot_3 ... (6505 more columns)
0 1 1 0

etc.


Not a duplicate



Not a duplicate of How to split Vector into columns - using PySpark.
That question asks how to do literal string splitting based on a delimiter. The transform done by the pipeline is not simple string splitting. See Using Spark ML Pipelines just for Transformations.










apache-spark pipeline databricks

edited Nov 13 '18 at 22:18
asked Nov 13 '18 at 22:10 by Josh
  • why the downvote? – Josh, Nov 13 '18 at 22:12

  • Possible duplicate of How to split Vector into columns - using PySpark – user10465355, Nov 13 '18 at 22:14

  • modified to explain why not duplicate – Josh, Nov 13 '18 at 22:19















1 Answer

"How do I look at the dataframe that is created under the hood by the pipeline?"




There is no such hidden structure. Spark ML Pipelines are built around VectorUDT columns, with metadata used to enrich that structure. There is no intermediate structure that holds the expanded columns, and if there were, it wouldn't scale given the current implementation: Spark doesn't handle the wide, dense data that would be generated here, and the query planner chokes when the number of columns gets into the tens of thousands.



Splitting the columns and analyzing the metadata is your best and only option.
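For what it's worth, here is a minimal sketch of that approach. It assumes Spark 3.x, where pyspark.ml.functions.vector_to_array is available (on the 2.x versions that still ship OneHotEncoderEstimator you would need a udf wrapping Vector.toArray() instead); dataset_2 refers to the dataframe built in the question.

from pyspark.sql.functions import col
from pyspark.ml.functions import vector_to_array

# VectorAssembler records a name for each vector slot in the column's
# ML attribute metadata; recover (index, name) pairs from it.
attrs = dataset_2.schema["features"].metadata["ml_attr"]["attrs"]
names = sorted(
    (attr["idx"], attr["name"])
    for group in attrs.values()   # e.g. the "numeric" and "binary" groups
    for attr in group
)

# Expand the vector into one column per feature, using the recovered names.
exploded = dataset_2.withColumn("f", vector_to_array("features")).select(
    "response_variable",
    *[col("f")[idx].alias(name) for idx, name in names]
)

Note that selecting all 6508 columns at once runs straight into the scaling caveat above; for exploration it is safer to filter names down to the handful of features you actually want to inspect before building the select.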






answered Nov 14 '18 at 9:50 by user10651176


  • So there is no method to create such a dataframe? I find that very hard to believe. – Josh, Nov 14 '18 at 14:55









