Ecosystem Extension: Spark Doris Connector

小磊z 2023-11-03

When tuning Spark, a good rule of thumb is to keep each task's input + shuffle read volume at roughly 300-500 MB.
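
In practice, applying that rule of thumb mostly means choosing a sensible shuffle partition count. The sketch below is one way to derive spark.sql.shuffle.partitions from the shuffle volume you observe in the Spark UI; the 200 GB figure, the 400 MB target, and the object name are illustrative assumptions, not values from this post.

```scala
import org.apache.spark.sql.SparkSession

object ShufflePartitionSizing {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("shuffle-partition-sizing").getOrCreate()

    // Hypothetical figures: suppose the Spark UI shows the stage shuffles
    // about 200 GB. Targeting ~400 MB per task (the middle of the
    // 300-500 MB band) gives 200 * 1024 / 400 = 512 shuffle partitions.
    val shuffleBytes  = 200L * 1024 * 1024 * 1024
    val targetPerTask = 400L * 1024 * 1024
    val partitions    = math.max(1, (shuffleBytes / targetPerTask).toInt)

    // spark.sql.shuffle.partitions controls how many tasks the next shuffle
    // (joins, aggregations) uses, and therefore the per-task data volume.
    spark.conf.set("spark.sql.shuffle.partitions", partitions.toString)

    spark.stop()
  }
}
```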

The Spark UI is documented here: https://spark.apache.org/docs/3.0.1/web-ui.html

The relevant paragraph reads:

  • Input: Bytes read from storage in this stage
  • Output: Bytes written in storage in this stage
  • Shuffle read: Total shuffle bytes and records read, includes both data read locally and data read from remote executors
  • Shuffle write: Bytes and records written to disk in order to be read by a shuffle in a future stage
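
Beyond reading these numbers off the web UI, they can also be observed programmatically. The following is a minimal sketch, assuming a Spark 3.x application, of a SparkListener that logs the same per-stage aggregates (input, output, shuffle read, shuffle write); the class name StageSizeListener and the log format are illustrative.

```scala
import org.apache.spark.scheduler.{SparkListener, SparkListenerStageCompleted}
import org.apache.spark.sql.SparkSession

// Logs, for each completed stage, the aggregates the Spark UI displays.
class StageSizeListener extends SparkListener {
  override def onStageCompleted(stageCompleted: SparkListenerStageCompleted): Unit = {
    val info = stageCompleted.stageInfo
    val m = info.taskMetrics            // metrics accumulated across the stage's tasks
    if (m != null) {
      val mb = 1024.0 * 1024.0
      println(
        f"stage ${info.stageId}: " +
        f"input=${m.inputMetrics.bytesRead / mb}%.1f MB, " +
        f"output=${m.outputMetrics.bytesWritten / mb}%.1f MB, " +
        f"shuffleRead=${m.shuffleReadMetrics.totalBytesRead / mb}%.1f MB, " +
        f"shuffleWrite=${m.shuffleWriteMetrics.bytesWritten / mb}%.1f MB " +
        f"over ${info.numTasks} tasks")
    }
  }
}

object ListenerDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("stage-size-listener").getOrCreate()
    spark.sparkContext.addSparkListener(new StageSizeListener)
    // ... run the job whose per-task data volume you want to inspect ...
    spark.stop()
  }
}
```

Dividing the logged stage totals by the task count gives the per-task figure to compare against the 300-500 MB guideline above.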