-
Deprecated Classes
Deprecated since 4.0.0 as its only usage for Python evaluation is now extinct.
Use UnivariateFeatureSelector instead. Since 3.1.1.
Use SparkListenerExecutorExcluded instead. Since 3.1.0.
Use SparkListenerExecutorExcludedForStage instead. Since 3.1.0.
Use SparkListenerExecutorUnexcluded instead. Since 3.1.0.
Use SparkListenerNodeExcluded instead. Since 3.1.0.
Use SparkListenerNodeExcludedForStage instead. Since 3.1.0.
Use SparkListenerNodeUnexcluded instead. Since 3.1.0.
As of release 3.0.0, please use the untyped builtin aggregate functions.
Please use untyped builtin aggregate functions. Since 3.0.0.
UserDefinedAggregateFunction is deprecated. Aggregator[IN, BUF, OUT] should now be registered as a UDF via the functions.udaf(agg) method. (Example below.)
This is deprecated as of Spark 3.4.0. DStream no longer receives updates and is a legacy project; Structured Streaming is the newer and easier-to-use streaming engine in Spark, and should be used for streaming applications. (2 classes)
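The UserDefinedAggregateFunction entries above point at the Aggregator-based replacement. A minimal sketch of the migration, assuming Spark 3.x; the LongSum object, the UDF name, and the sample values are illustrative and not taken from this list:

```scala
import org.apache.spark.sql.{Encoder, Encoders, SparkSession, functions}
import org.apache.spark.sql.expressions.Aggregator

// Hypothetical aggregator that sums Long values; names are illustrative only.
object LongSum extends Aggregator[Long, Long, Long] {
  def zero: Long = 0L
  def reduce(buffer: Long, value: Long): Long = buffer + value
  def merge(b1: Long, b2: Long): Long = b1 + b2
  def finish(reduction: Long): Long = reduction
  def bufferEncoder: Encoder[Long] = Encoders.scalaLong
  def outputEncoder: Encoder[Long] = Encoders.scalaLong
}

val spark = SparkSession.builder().getOrCreate()

// Register the Aggregator as a UDF instead of extending UserDefinedAggregateFunction.
spark.udf.register("long_sum", functions.udaf(LongSum))
spark.sql("SELECT long_sum(v) FROM VALUES (1L), (2L), (3L) AS t(v)").show()
```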
-
Deprecated Fields
Use `CHILD_CONNECTION_TIMEOUT`.
-
Deprecated Methods
This method is deprecated and will be removed in future versions. Use ClusteringEvaluator instead. You can also get the cost on the training dataset in the summary.
`labels` is deprecated and will be removed in 3.1.0. Use `labelsArray` instead. Since 3.0.0.
Use load with SparkSession. Since 4.0.0.
Use saveImpl with SparkSession. Since 4.0.0.
Use onExecutorExcluded instead. Since 3.1.0.
Use onExecutorExcludedForStage instead. Since 3.1.0.
Use onExecutorUnexcluded instead. Since 3.1.0.
Use onNodeExcluded instead. Since 3.1.0.
Use onNodeExcludedForStage instead. Since 3.1.0.
Use onNodeUnexcluded instead. Since 3.1.0.
Use SparkThrowable.getCondition() instead.
Use createTable instead. Since 2.2.0. (6 methods)
Use json(Dataset[String]) instead. Since 2.2.0. (2 methods; example below)
Use flatMap() or select() with functions.explode() instead. Since 2.0.0. (2 methods; example below)
Use createOrReplaceTempView(viewName) instead. Since 2.0.0.
Use toSqlType(..., useStableIdForUnionType: Boolean) instead. Since 4.0.0.
This is deprecated. Please override StagingTableCatalog.stageCreate(Identifier, Column[], Transform[], Map) instead.
This is deprecated. Please override Table.columns() instead.
This is deprecated. Please override TableCatalog.createTable(Identifier, Column[], Transform[], Map) instead.
Use WriteBuilder.build() instead. (2 methods)
Use json(Dataset[String]) instead. Since 2.2.0. (2 methods)
Use flatMap() or select() with functions.explode() instead. Since 2.0.0. (2 methods)
Use approx_count_distinct. Since 2.1.0. (4 methods; example below)
Use bitwise_not. Since 3.2.0.
Use call_udf.
Use monotonically_increasing_id(). Since 2.0.0.
Use shiftleft. Since 3.2.0.
Use shiftright. Since 3.2.0.
Use shiftrightunsigned. Since 3.2.0.
Use sum_distinct. Since 3.2.0. (2 methods)
Use degrees. Since 2.1.0. (2 methods)
Use radians. Since 2.1.0. (2 methods)
The Scala `udf` method with a return type parameter is deprecated. Please use the Scala `udf` method without the return type parameter. Since 3.0.0.
Please override the classifyException method with an error class. Since 4.0.0.
Use org.apache.spark.sql.jdbc.JdbcDialect.compileExpression instead. Since 3.4.0.
Please override the renameTable method with identifiers. Since 3.5.0.
Use createDataFrame instead. Since 1.3.0. (4 methods)
Use SparkSession.clearActiveSession instead. Since 2.0.0.
Use sparkSession.catalog.createTable instead. Since 2.2.0. (3 methods)
org.apache.spark.sql.SQLContext.createExternalTable(String, String, StructType, Map<String, String>): use sparkSession.catalog.createTable instead. Since 2.2.0. (2 methods)
Use sparkSession.catalog.createTable instead. Since 2.2.0.
Use SparkSession.builder instead. Since 2.0.0.
As of 1.4.0, replaced by read().jdbc(). (3 methods; example below)
As of 1.4.0, replaced by read().json(). (9 methods)
As of 1.4.0, replaced by read().load(path). (example below)
As of 1.4.0, replaced by read().format(source).load(path).
As of 1.4.0, replaced by read().format(source).options(options).load().
As of 1.4.0, replaced by read().format(source).schema(schema).options(options).load(). (2 methods)
As of 1.4.0, replaced by read().format(source).options(options).load().
As of 1.4.0, replaced by read().parquet().
Use read.parquet() instead. Since 1.4.0.
Use SparkSession.setActiveSession instead. Since 2.0.0.
This is deprecated as of Spark 3.4.0. Use Trigger.AvailableNow() to get a better processing guarantee, fine-grained scaling of batches, and gradual processing of watermark advancement, including the no-data batch. See the NOTES in Trigger.AvailableNow() for details. (Example below.)
This method and the use of UserDefinedAggregateFunction are deprecated. Aggregator[IN, BUF, OUT] should now be registered as a UDF via the functions.udaf(agg) method.
Use isExcludedForStage instead. Since 3.1.0.
Use excludedInStages instead. Since 3.1.0.
Use isExcluded instead. Since 3.1.0.
Use synchronizers and monitors instead. Since 4.0.0.
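The json(Dataset[String]) entries above replace the RDD-based JSON readers. A minimal sketch, assuming an active SparkSession named spark; the sample records are made up:

```scala
import org.apache.spark.sql.{Dataset, SparkSession}

val spark = SparkSession.builder().getOrCreate()
import spark.implicits._

// JSON documents as a Dataset[String], e.g. built from an in-memory collection.
val jsonLines: Dataset[String] =
  Seq("""{"id": 1, "name": "a"}""", """{"id": 2, "name": "b"}""").toDS()

// Preferred over the deprecated json(RDD[String]) / json(JavaRDD[String]) overloads.
val df = spark.read.json(jsonLines)
df.show()
```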
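The explode entries recommend select() with functions.explode() in place of the deprecated DataFrame explode method. A sketch under the same assumptions (an active spark session, toy data):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.explode

val spark = SparkSession.builder().getOrCreate()
import spark.implicits._

val df = Seq((1, Seq("a", "b")), (2, Seq("c"))).toDF("id", "letters")

// Instead of the deprecated explode(...) on the DataFrame, project the exploded column.
df.select($"id", explode($"letters").as("letter")).show()
```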
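The renamed SQL functions listed above (approx_count_distinct, sum_distinct, shiftleft, degrees, and so on) are drop-in replacements for their camel-case predecessors. A small sketch with invented data:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{approx_count_distinct, degrees, sum_distinct}

val spark = SparkSession.builder().getOrCreate()
import spark.implicits._

val nums = Seq(1, 1, 2, 3).toDF("n")

// approxCountDistinct / sumDistinct / toDegrees are replaced by the snake_case names.
nums.agg(approx_count_distinct($"n"), sum_distinct($"n")).show()
nums.select(degrees($"n")).show()
```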
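The 1.4.0-era load, jdbc, and parquet entries all funnel into the DataFrameReader API reached via read(). A sketch with placeholder paths and connection settings; none of these values come from the list:

```scala
import java.util.Properties
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().getOrCreate()

// Placeholder locations and credentials, for illustration only.
val parquetDf = spark.read.parquet("/tmp/example.parquet")
val csvDf = spark.read.format("csv").option("header", "true").load("/tmp/example.csv")

val props = new Properties()
props.setProperty("user", "example")
val jdbcDf = spark.read.jdbc("jdbc:postgresql://localhost/db", "public.example", props)
```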
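The Trigger entry points at Trigger.AvailableNow(). A minimal streaming sketch, assuming Spark 3.3+; the built-in rate source and console sink are used purely for illustration:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

val spark = SparkSession.builder().getOrCreate()

// Illustrative source only; any streaming source is wired up the same way.
val stream = spark.readStream.format("rate").load()

val query = stream.writeStream
  .format("console")
  .trigger(Trigger.AvailableNow())   // instead of the deprecated Trigger.Once()
  .start()

query.awaitTermination()
```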
-
Deprecated Constructors
Use SparkSession.builder instead. Since 2.0.0. (2 constructors; example below)
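Both deprecated constructors point at the builder. A one-line sketch, assuming nothing beyond a plain Spark application; the application name is illustrative:

```scala
import org.apache.spark.sql.SparkSession

// Replaces constructing the deprecated context directly.
val spark = SparkSession.builder()
  .appName("example")
  .getOrCreate()

// The legacy SQLContext is still reachable when older code needs it.
val sqlContext = spark.sqlContext
```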