Background
This article is based on Spark 3.5.0.
The purpose of this article is to work out how to speed up Parquet writes with the spark.sql.maxConcurrentOutputFileWriters parameter. To that end, we look into how much memory Spark occupies while writing Parquet, so that spark.sql.maxConcurrentOutputFileWriters can be set to a value that keeps the job stable.
Conclusion
A single Spark Parquet writer can occupy up to 128 MB of memory (i.e., the value of parquet.block.size). So when tuning spark.sql.maxConcurrentOutputFileWriters, be careful not to set it too high, or the job may OOM. However, if small-file compaction is added at the final write stage (via AQE plus a Rebalance), the value can be raised somewhat: at that point the task is purely writing Parquet files, unlike a task without such a shuffle, which may also be doing memory-hungry work such as sort and aggregate.
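To make the sizing concrete, here is a minimal sketch of how one might bound the value. The executor size, the headroom fraction, and the resulting number are illustrative assumptions, not values from this article; spark is the active SparkSession:

// Assumption: an 8 GB executor, keeping half the memory free for everything else.
val executorMemoryBytes = 8L * 1024 * 1024 * 1024
val rowGroupBytes       = 128L * 1024 * 1024          // parquet.block.size, 128 MB by default
val headroom            = 0.5
val maxWriters = ((executorMemoryBytes * headroom) / rowGroupBytes).toInt   // ~32 here

spark.conf.set("spark.sql.maxConcurrentOutputFileWriters", maxWriters.toString)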
You can also refer to the companion article 《Parquet文件是怎么被写入的-Row Groups,Pages,需要的内存,以及flush操作》 (how Parquet files are written: row groups, pages, the memory required, and the flush behavior).
 
Analysis
Once again we start from the InsertIntoHadoopFsRelationCommand class. The data flow involved in writing Parquet is as follows:
InsertIntoHadoopFsRelationCommand.run
        ||
        \/
FileFormatWriter.write
        ||
        \/
fileFormat.prepareWrite
        ||
        \/
executeWrite => planForWrites.executeWrite 
                        ||
                        \/
               WriteFilesExec.doExecuteWrite
                        ||
                        \/
               FileFormatWriter.executeTask
                        ||
                        \/
               dataWriter.writeWithIterator
                        ||
                        \/
               dataWriter.writeWithMetrics
                        ||
                        \/
               DynamicPartitionDataConcurrentWriter.write
                        ||
                        \/
                   writeRecord
                        ||
                        \/
               ParquetOutputWriter.write
                        ||
                        \/
               recordWriter.write
 
Specifically:
fileFormat.prepareWrite covers the Spark-level Parquet settings and returns an OutputWriterFactory, a factory that produces ParquetOutputWriter instances.
The main settings include parquet.compression, the compression codec (typically zstd), which can also be set via spark.sql.parquet.compression.codec.
parquet.write.support.class is set to ParquetWriteSupport, the class through which Spark converts its internal InternalRow into a Parquet message.
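For example, the codec can be picked session-wide or per write (a small sketch; df and the output path are placeholders):

// Session-wide default for Parquet output:
spark.conf.set("spark.sql.parquet.compression.codec", "zstd")

// Or per write, through the DataFrameWriter option:
df.write.option("compression", "zstd").parquet("/tmp/out")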
DynamicPartitionDataConcurrentWriter.write involves code generation for the InternalRow to UnsafeRow conversion.
We won't go into the details of that here, and will only discuss two parts: getPartitionValues, and getPartitionPath as used in the renewCurrentWriter method.
getPartitionValues
This performs the InternalRow => UnsafeRow conversion. Why do this? Because UnsafeRow, as a data structure, manages its own memory well and largely avoids GC pressure:

val proj = UnsafeProjection.create(description.partitionColumns, description.allColumns)
row => proj(row)

Let's take InterpretedUnsafeProjection, a subclass of UnsafeProjection that is not code-generated (which makes it easier to analyze):

override def apply(row: InternalRow): UnsafeRow = {
  if (subExprEliminationEnabled) {
    runtime.setInput(row)
  }
  // Put the expression results in the intermediate row.
  var i = 0
  while (i < numFields) {
    values(i) = exprs(i).eval(row)
    i += 1
  }
  // Write the intermediate row to an unsafe row.
  rowWriter.reset()
  writer(intermediate)
  rowWriter.getRow()
}

- First, common subexpressions are eliminated (when enabled, the input row is handed to the subexpression runtime).
- The values array stores the result of evaluating each expression.
- rowWriter.reset() realigns the cursor, which matters for writing variable-length types such as String (see the article 《UnsafeRow内存布局和代码优化》). The unsafeWriter then writes each value, according to its type, to its slot in the UnsafeRow; the offset lies inside the region tracked by the cursor, i.e., the cursor value is always greater than the offset.
- Finally an UnsafeRow is returned.

In this way the InternalRow => UnsafeRow conversion is completed.
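As a standalone illustration, here is a hedged sketch: the schema and values are made up, and it calls the catalyst-internal API directly, which ordinary applications normally would not:

import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.catalyst.expressions.UnsafeProjection
import org.apache.spark.sql.types._
import org.apache.spark.unsafe.types.UTF8String

val schema = StructType(Seq(StructField("dt", StringType), StructField("id", IntegerType)))
val proj   = UnsafeProjection.create(schema)
// A generic InternalRow backed by Java objects...
val row = InternalRow(UTF8String.fromString("2024-01-01"), 1)
// ...becomes a compact, GC-friendly binary UnsafeRow.
val unsafeRow = proj(row)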
getPartitionPath
This obtains the partition path via expression evaluation, completing the InternalRow => String conversion. The relevant code:

private lazy val partitionPathExpression: Expression = Concat(
  description.partitionColumns.zipWithIndex.flatMap { case (c, i) =>
    val partitionName = ScalaUDF(
      ExternalCatalogUtils.getPartitionPathString _,
      StringType,
      Seq(Literal(c.name), Cast(c, StringType, Option(description.timeZoneId))))
    if (i == 0) Seq(partitionName) else Seq(Literal(Path.SEPARATOR), partitionName)
  })

private lazy val getPartitionPath: InternalRow => String = {
  val proj = UnsafeProjection.create(Seq(partitionPathExpression), description.partitionColumns)
  row => proj(row).getString(0)
}

UnsafeProjection.create was covered above, so the focus here is on partitionPathExpression, the expression that generates the partition path.
It is built around the getPartitionPathString UDF; the key point is the arguments passed in: Literal(c.name) and Cast(c, StringType, Option(description.timeZoneId)).
Literal(c.name) is a constant carrying the partition column's name.
Cast(c, StringType, Option(description.timeZoneId)) is the value represented by the variable c.
Why? Because ScalaUDF's internal evaluation method looks like this:

override def eval(input: InternalRow): Any = {
  val result = try {
    f(input)
  } catch {
    case e: Exception =>
      throw QueryExecutionErrors.failedExecuteUserDefinedFunctionError(
        functionName, inputTypesString, outputType, e)
  }
  resultConverter(result)
}

Here f calls eval(InternalRow) on each argument: for a Literal that yields the constant itself, while for Cast(Attribute) it yields the attribute's value (bound via BindReferences.bindReference).
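As a quick illustration of what the UDF produces (a sketch; the exact escaped form shown is an assumption based on Hive-style path escaping):

import org.apache.spark.sql.catalyst.catalog.ExternalCatalogUtils

ExternalCatalogUtils.getPartitionPathString("dt", "2024-01-01")
// => "dt=2024-01-01"
ExternalCatalogUtils.getPartitionPathString("city", "a/b")
// => the value is escaped so the path stays well-formed, e.g. "city=a%2Fb"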
recordWriter.write leads into ParquetOutputFormat.getRecordWriter, where a number of Parquet-native parameters are set up:
public RecordWriter<Void, T> getRecordWriter(Configuration conf, Path file, CompressionCodecName codec, Mode mode)
        throws IOException, InterruptedException {
    final WriteSupport<T> writeSupport = getWriteSupport(conf);
    ParquetProperties.Builder propsBuilder = ParquetProperties.builder()
        .withPageSize(getPageSize(conf))
        .withDictionaryPageSize(getDictionaryPageSize(conf))
        .withDictionaryEncoding(getEnableDictionary(conf))
        .withWriterVersion(getWriterVersion(conf))
        .estimateRowCountForPageSizeCheck(getEstimatePageSizeCheck(conf))
        .withMinRowCountForPageSizeCheck(getMinRowCountForPageSizeCheck(conf))
        .withMaxRowCountForPageSizeCheck(getMaxRowCountForPageSizeCheck(conf))
        .withColumnIndexTruncateLength(getColumnIndexTruncateLength(conf))
        .withStatisticsTruncateLength(getStatisticsTruncateLength(conf))
        .withMaxBloomFilterBytes(getBloomFilterMaxBytes(conf))
        .withBloomFilterEnabled(getBloomFilterEnabled(conf))
        .withPageRowCountLimit(getPageRowCountLimit(conf))
        .withPageWriteChecksumEnabled(getPageWriteChecksumEnabled(conf));
    new ColumnConfigParser()
        .withColumnConfig(ENABLE_DICTIONARY, key -> conf.getBoolean(key, false), propsBuilder::withDictionaryEncoding)
        .withColumnConfig(BLOOM_FILTER_ENABLED, key -> conf.getBoolean(key, false),
            propsBuilder::withBloomFilterEnabled)
        .withColumnConfig(BLOOM_FILTER_EXPECTED_NDV, key -> conf.getLong(key, -1L), propsBuilder::withBloomFilterNDV)
        .withColumnConfig(BLOOM_FILTER_FPP, key -> conf.getDouble(key, ParquetProperties.DEFAULT_BLOOM_FILTER_FPP),
            propsBuilder::withBloomFilterFPP)
        .parseConfig(conf);
    ParquetProperties props = propsBuilder.build();
    long blockSize = getLongBlockSize(conf);
    int maxPaddingSize = getMaxPaddingSize(conf);
    boolean validating = getValidation(conf);
    ...
    WriteContext fileWriteContext = writeSupport.init(conf);
    FileEncryptionProperties encryptionProperties = createEncryptionProperties(conf, file, fileWriteContext);
    ParquetFileWriter w = new ParquetFileWriter(HadoopOutputFile.fromPath(file, conf),
        fileWriteContext.getSchema(), mode, blockSize, maxPaddingSize, props.getColumnIndexTruncateLength(),
        props.getStatisticsTruncateLength(), props.getPageWriteChecksumEnabled(), encryptionProperties);
    w.start();
    ...
    return new ParquetRecordWriter<T>(
        w,
        writeSupport,
        fileWriteContext.getSchema(),
        fileWriteContext.getExtraMetaData(),
        blockSize,
        codec,
        validating,
        props,
        memoryManager,
        conf);
  }
 
The key parameters involved here are:
   parquet.page.size                   1*1024*1024         -- page size, default 1 MB
   parquet.block.size                  128*1024*1024       -- row group size, default 128 MB
   parquet.page.size.row.check.min     100                 -- minimum row count before checking whether the page size has been reached
   parquet.page.size.row.check.max     10000               -- maximum row count between page-size checks
   parquet.page.row.count.limit        20_000              -- hard cap on the number of rows per page
 
parquet.page.size.row.check.min, parquet.page.size.row.check.max, and parquet.page.row.count.limit constrain one another; the overall idea is that once the row count reaches a certain threshold, the writer checks whether the buffered values can be flushed into an in-memory page. For the details see the methods of the ColumnWriteStoreBase class. These Parquet-native knobs can also be tuned per write, as sketched below.
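If per-writer memory is the concern, one option is to lower the row-group size for a given write (a sketch: df and the output path are placeholders, and note that smaller row groups generally trade off scan efficiency):

// Buffer 32 MB per row group instead of the default 128 MB.
df.write
  .option("parquet.block.size", (32L * 1024 * 1024).toString)
  .option("parquet.page.size", (1L * 1024 * 1024).toString)   // keep the 1 MB default page size
  .parquet("/tmp/out")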
Next comes the actual write, via the InternalParquetRecordWriter.write method:
 private void initStore() {
    ColumnChunkPageWriteStore columnChunkPageWriteStore = new ColumnChunkPageWriteStore(compressor,
        schema, props.getAllocator(), props.getColumnIndexTruncateLength(), props.getPageWriteChecksumEnabled(),
        fileEncryptor, rowGroupOrdinal);
    pageStore = columnChunkPageWriteStore;
    bloomFilterWriteStore = columnChunkPageWriteStore;
    columnStore = props.newColumnWriteStore(schema, pageStore, bloomFilterWriteStore);
    MessageColumnIO columnIO = new ColumnIOFactory(validating).getColumnIO(schema);
    this.recordConsumer = columnIO.getRecordWriter(columnStore);
    writeSupport.prepareForWrite(recordConsumer);
  }
  public void write(T value) throws IOException, InterruptedException {
    writeSupport.write(value);
    ++ recordCount;
    checkBlockSizeReached();
  }
 
initStore mainly initializes pageStore and columnStore.
How exactly a Spark InternalRow becomes a Parquet message is handled mainly by the rootFieldWriters inside writeSupport.write; their shape is sketched below.
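To give the flavor of those writers, here is a hedged sketch: ValueWriter mirrors the private type alias in Spark's ParquetWriteSupport, while the two make*Writer helpers are hypothetical, not Spark code:

import org.apache.parquet.io.api.{Binary, RecordConsumer}
import org.apache.spark.sql.catalyst.expressions.SpecializedGetters

object FieldWriterSketch {
  // One writer per schema field: it reads ordinal i from the row and
  // emits the value through Parquet's RecordConsumer.
  type ValueWriter = (SpecializedGetters, Int) => Unit

  def makeIntWriter(consumer: RecordConsumer): ValueWriter =
    (row, ordinal) => consumer.addInteger(row.getInt(ordinal))

  def makeStringWriter(consumer: RecordConsumer): ValueWriter =
    (row, ordinal) =>
      consumer.addBinary(Binary.fromReusedByteArray(row.getUTF8String(ordinal).getBytes))
}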
Next comes checkBlockSizeReached, which is where the row group gets flushed to disk.
Readers can follow the code for the details:
for the flush into pages, see columnStore.flush() inside checkBlockSizeReached;
for the flush of the row group to disk, see pageStore.flushToFileWriter(parquetFileWriter) inside checkBlockSizeReached. A simplified model of this check follows.
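For intuition, here is a simplified, self-contained model of that check (a paraphrase, not the real parquet-mr code; the re-check heuristic only approximates what InternalParquetRecordWriter does):

class RowGroupFlushModel(rowGroupSizeBytes: Long, minCheck: Long = 100L) {
  private var bufferedBytes = 0L
  private var recordCount = 0L
  private var nextCheck = minCheck

  def write(recordBytes: Long): Unit = {
    bufferedBytes += recordBytes      // in reality: columnStore buffers encoded column data
    recordCount += 1
    if (recordCount >= nextCheck) {   // only measure every so often, like checkBlockSizeReached
      if (bufferedBytes >= rowGroupSizeBytes) {
        flush()                       // in reality: pageStore.flushToFileWriter(parquetFileWriter)
      } else {
        // estimate how many more records fit before it is worth checking again
        val avg = math.max(bufferedBytes / recordCount, 1L)
        nextCheck = recordCount + math.max(minCheck, (rowGroupSizeBytes - bufferedBytes) / avg / 2)
      }
    }
  }

  private def flush(): Unit = {
    println(s"flush row group: $bufferedBytes bytes in $recordCount records")
    bufferedBytes = 0L; recordCount = 0L; nextCheck = minCheck
  }
}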
To sum up: a single Spark Parquet writer may occupy 128 MB of memory (i.e., the size of parquet.block.size), because only once the row-group size has been reached is the data actually flushed to disk.