Change output filename prefix for DataFrame.write()
























Output files generated via the Spark SQL DataFrame.write() method begin with the "part" basename prefix. For example:



DataFrame sample_07 = hiveContext.table("sample_07");
sample_07.write().parquet("sample_07_parquet");


Results in:



hdfs dfs -ls sample_07_parquet/ 
Found 4 items
-rw-r--r-- 1 rob rob 0 2016-03-19 16:40 sample_07_parquet/_SUCCESS
-rw-r--r-- 1 rob rob 491 2016-03-19 16:40 sample_07_parquet/_common_metadata
-rw-r--r-- 1 rob rob 1025 2016-03-19 16:40 sample_07_parquet/_metadata
-rw-r--r-- 1 rob rob 17194 2016-03-19 16:40 sample_07_parquet/part-r-00000-cefb2ac6-9f44-4ce4-93d9-8e7de3f2cb92.gz.parquet


I would like to change the output filename prefix used when creating a file with Spark SQL DataFrame.write(). I tried setting the "mapreduce.output.basename" property on the Hadoop configuration for the Spark context. For example:



public class MyJavaSparkSQL {

    public static void main(String[] args) throws Exception {
        SparkConf sparkConf = new SparkConf().setAppName("MyJavaSparkSQL");
        JavaSparkContext ctx = new JavaSparkContext(sparkConf);
        ctx.hadoopConfiguration().set("mapreduce.output.basename", "myprefix");
        HiveContext hiveContext = new org.apache.spark.sql.hive.HiveContext(ctx.sc());
        DataFrame sample_07 = hiveContext.table("sample_07");
        sample_07.write().parquet("sample_07_parquet");
        ctx.stop();
    }
}


That did not change the output filename prefix for the generated files.



Is there a way to override the output filename prefix when using the DataFrame.write() method?










      java apache-spark mapreduce apache-spark-sql






asked Mar 19 '16 at 21:46 by Rob
edited Dec 15 '16 at 16:43 by VishAmdi






















          2 Answers
































          You cannot change the "part" prefix while using any of the standard output formats (like Parquet). See this snippet from the ParquetRelation source code:



          private val recordWriter: RecordWriter[Void, InternalRow] = {
            val outputFormat = new ParquetOutputFormat[InternalRow]() {
              // ...
              override def getDefaultWorkFile(context: TaskAttemptContext, extension: String): Path = {
                // ...
                // prefix is hard-coded here:
                new Path(path, f"part-r-$split%05d-$uniqueWriteJobId$bucketString$extension")
              }
            }
            // ...
          }





          If you really must control the part file names, you'll probably have to implement a custom FileOutputFormat and use one of Spark's save methods that accept a FileOutputFormat class (e.g. saveAsHadoopFile).






          answered Mar 19 '16 at 23:12 by Tzach Zohar
            Assuming the output folder has only one CSV file in it, we can rename it programmatically using the code below. The last statement gets every file of CSV type from the output directory and renames it to the desired file name.

            import org.apache.hadoop.fs.{FileSystem, Path}
            import org.apache.hadoop.conf.Configuration

            val outputfolder_Path = "s3://<s3_AccessKey>:<s3_Securitykey>@<external_bucket>/<path>"
            val fs = FileSystem.get(new java.net.URI(outputfolder_Path), new Configuration())
            fs.globStatus(new Path(outputfolder_Path + "/*.*"))
              .filter(_.getPath.toString.split("/").last.split("\\.").last == "csv")
              .foreach { l =>
                fs.rename(new Path(l.getPath.toString), new Path(outputfolder_Path + "/DesiredFilename.csv"))
              }
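            The same rename-after-write idea carries over to the multi-file "part-*" case. As a minimal, hypothetical sketch in plain Java against a local directory (a real job on HDFS or S3 would use the Hadoop FileSystem API as in the Scala snippet above, but the filename manipulation is identical; the file names and "myprefix" below are made up for illustration):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class RenameParts {
    // Rename every file matching "part-*" in dir so its name starts
    // with newPrefix instead of "part". Purely illustrative.
    static void renameParts(Path dir, String newPrefix) throws IOException {
        try (DirectoryStream<Path> parts = Files.newDirectoryStream(dir, "part-*")) {
            for (Path p : parts) {
                String name = p.getFileName().toString();
                Path target = p.resolveSibling(newPrefix + name.substring("part".length()));
                Files.move(p, target);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Simulate a Spark output directory with one part file and a marker.
        Path dir = Files.createTempDirectory("sample_07_parquet");
        Files.createFile(dir.resolve("part-r-00000-cefb2ac6.gz.parquet"));
        Files.createFile(dir.resolve("_SUCCESS"));

        renameParts(dir, "myprefix");

        // The part file now carries the new prefix; _SUCCESS is untouched.
        System.out.println(Files.exists(dir.resolve("myprefix-r-00000-cefb2ac6.gz.parquet")));
        System.out.println(Files.exists(dir.resolve("_SUCCESS")));
    }
}
```

            Note that, unlike a true output-format override, this runs on the driver after the job finishes, so it adds a serial pass over the output directory.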





            answered Nov 11 at 7:19 by Sarath Avanavu