[Spark Release Update] Changes in spark-1.6.3

Villagers 2022-11-03



Sub-task

  • [SPARK-16488] - Codegen variable namespace collision for pmod and partitionBy
  • [SPARK-16491] - Crc32 should use different variable names (not "checksum")
  • [SPARK-16514] - RegexExtract and RegexReplace crash on non-nullable input
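These sub-tasks all stem from one problem: whole-stage code generation emitted Java source in which two expressions could declare the same local variable (e.g. both naming it "checksum"), so the generated class failed to compile. The fix is to mint a unique name per request. A minimal sketch of that idea in Python (the class and method names here are illustrative, not Spark's actual codegen API):

```python
import itertools

class FreshNames:
    """Illustrative fresh-name generator: every requested prefix gets a
    unique numeric suffix, so two expressions that both ask for
    "checksum" can never collide in the generated source."""
    def __init__(self):
        self._ids = itertools.count()

    def fresh(self, prefix: str) -> str:
        # Append a monotonically increasing id to the requested prefix.
        return f"{prefix}_{next(self._ids)}"

ctx = FreshNames()
a = ctx.fresh("checksum")  # "checksum_0"
b = ctx.fresh("checksum")  # "checksum_1"
```

SPARK-16489 then adds a test harness asserting that no generated variable name is reused, so regressions of this class are caught automatically.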

Bug

  • [SPARK-6005] - Flaky test: o.a.s.streaming.kafka.DirectKafkaStreamSuite.offset recovery
  • [SPARK-11301] - filter on partitioned column is case sensitive even the context is case insensitive
  • [SPARK-14209] - Application failure during preemption.
  • [SPARK-15541] - SparkContext.stop throws error
  • [SPARK-15606] - Driver hang in o.a.s.DistributedSuite on 2 core machine
  • [SPARK-16044] - input_file_name() returns empty strings in data sources based on NewHadoopRDD.
  • [SPARK-16077] - Python UDF may fail because of six
  • [SPARK-16078] - from_utc_timestamp/to_utc_timestamp may give different result in different timezone
  • [SPARK-16182] - Utils.scala -- terminateProcess() should call Process.destroyForcibly() if and only if Process.destroy() fails
  • [SPARK-16257] - spark-ec2 script not updated for 1.6.2 release
  • [SPARK-16313] - Spark should not silently drop exceptions in file listing
  • [SPARK-16353] - Intended javadoc options are not honored for Java unidoc
  • [SPARK-16375] - [Spark web UI]: The wrong value (numCompletedTasks) has been assigned to the variable numSkippedTasks
  • [SPARK-16385] - NoSuchMethodException thrown by Utils.waitForProcess
  • [SPARK-16409] - regexp_extract with optional groups causes NPE
  • [SPARK-16422] - maven 3.3.3 missing from mirror, breaks older builds
  • [SPARK-16440] - Undeleted broadcast variables in Word2Vec causing OoM for long runs
  • [SPARK-16489] - Test harness to prevent expression code generation from reusing variable names
  • [SPARK-16656] - CreateTableAsSelectSuite is flaky
  • [SPARK-16664] - Spark 1.6.2 - Persist call on DataFrames with more than 200 columns is wiping out the data.
  • [SPARK-16751] - Upgrade derby to 10.12.1.1 from 10.11.1.1
  • [SPARK-16873] - force spill NPE
  • [SPARK-16925] - Spark tasks which cause JVM to exit with a zero exit code may cause app to hang in Standalone mode
  • [SPARK-16939] - Fix build error by using `Tuple1` explicitly in StringFunctionSuite
  • [SPARK-17003] - release-build.sh is missing hive-thriftserver for scala 2.11
  • [SPARK-17038] - StreamingSource reports metrics for lastCompletedBatch instead of lastReceivedBatch
  • [SPARK-17245] - NPE thrown by ClientWrapper.conf
  • [SPARK-17356] - A large Metadata field in Alias can cause OOM when calling TreeNode.toJSON
  • [SPARK-17404] - [BRANCH-1.6] Broken test: showDF in test_sparkSQL.R
  • [SPARK-17418] - Spark release must NOT distribute Kinesis related assembly artifact
  • [SPARK-17465] - Inappropriate memory management in `org.apache.spark.storage.MemoryStore` may lead to memory leak
  • [SPARK-17531] - Don't initialize Hive Listeners for the Execution Client
  • [SPARK-17547] - Temporary shuffle data files may be leaked following exception in write
  • [SPARK-17617] - Remainder(%) expression.eval returns incorrect result
  • [SPARK-17618] - Dataframe except returns incorrect results when combined with coalesce
  • [SPARK-17678] - Spark 1.6 Scala-2.11 repl doesn't honor "spark.replClassServer.port"
  • [SPARK-17696] - Race in CoarseGrainedExecutorBackend shutdown can lead to wrong exit status
  • [SPARK-17721] - Erroneous computation in multiplication of transposed SparseMatrix with SparseVector
  • [SPARK-17884] - In the cast expression, casting from empty string to interval type throws NullPointerException
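The regexp_extract NPE (SPARK-16409) is worth illustrating, because the root cause is a general regex pitfall: an optional capture group that does not participate in the match yields null (None in Python) rather than an empty string, so any code that calls a method on the group without a null check crashes. A sketch in plain Python, mirroring the behavior of Java's Matcher.group that Spark's expression relied on:

```python
import re

# Group 2 is optional; against "ac" it matches zero times, so the
# group is non-participating and .group(2) returns None, not "".
m = re.match(r"(a)(b)?", "ac")
first = m.group(1)   # "a"
second = m.group(2)  # None
# The fixed behavior: fall back to an empty string instead of
# dereferencing the missing group.
safe = second if second is not None else ""
```

Spark 1.6.3 applies exactly this guard, returning an empty string when the requested group did not match.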

Improvement

  • [SPARK-2424] - ApplicationState.MAX_NUM_RETRY should be configurable
  • [SPARK-15761] - pyspark shell should load if PYSPARK_DRIVER_PYTHON is ipython and Python3
  • [SPARK-16341] - [SQL] In regexp_replace function, column and/or column expression should also be allowed as replacement.
  • [SPARK-16796] - Visible passwords on Spark environment page
  • [SPARK-17316] - Don't block StandaloneSchedulerBackend.executorRemoved
  • [SPARK-17378] - Upgrade snappy-java to 1.1.2.6
  • [SPARK-17485] - Failed remote cached block reads can lead to whole job failure
  • [SPARK-17649] - Log how many Spark events got dropped in LiveListenerBus
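SPARK-16796 addresses the environment page of the web UI printing configuration values verbatim, including credentials. The general remedy is to redact any value whose key looks sensitive before rendering. A hedged sketch of that idea in Python (the key pattern and mask below are illustrative, not Spark's actual redaction rule):

```python
import re

# Hypothetical sensitive-key pattern; real deployments should tune this.
SENSITIVE = re.compile(r"password|secret|token", re.IGNORECASE)

def redact(conf: dict) -> dict:
    """Return a copy of conf with sensitive-looking values masked,
    suitable for display on a status page."""
    return {k: ("*********" if SENSITIVE.search(k) else v)
            for k, v in conf.items()}

conf = {"spark.ssl.keyStorePassword": "hunter2",
        "spark.app.name": "demo"}
safe = redact(conf)  # password masked, app name untouched
```

Matching on the key rather than the value keeps the check cheap and avoids false positives on ordinary settings.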

New Feature

  • [SPARK-16956] - Make ApplicationState.MAX_NUM_RETRY configurable
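This is the feature side of the SPARK-2424 improvement listed above: the standalone master's hard-coded executor-failure retry limit becomes a configuration property. To the best of my understanding the property introduced is `spark.deploy.maxExecutorRetries`; verify the exact name and default against the configuration docs for your Spark version:

```properties
# spark-defaults.conf (standalone master) — property name per SPARK-16956.
# A negative value is documented as disabling the limit entirely.
spark.deploy.maxExecutorRetries   10
```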
