Error Conditions

This is a list of error states and conditions that may be returned by Spark SQL.

Each entry below gives the error state (SQLSTATE), followed by the error condition, any sub-conditions, and the message text.

07001 # ALL_PARAMETERS_MUST_BE_NAMED

Using named parameterized queries requires all parameters to be named. Parameters missing names: <exprs>.
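
For illustration, a minimal sketch of a named parameterized query from Scala (the parameter names lo and hi are hypothetical); every parameter marker must carry a name for the call to be valid:

    // Each marker (:lo, :hi) is named and bound in the args map.
    spark.sql(
      "SELECT id FROM range(20) WHERE id > :lo AND id < :hi",
      Map("lo" -> 5, "hi" -> 10)
    ).show()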

07501 # INVALID_STATEMENT_FOR_EXECUTE_INTO

The INTO clause of EXECUTE IMMEDIATE is only valid for queries but the given statement is not a query: <sqlString>.

07501 # NESTED_EXECUTE_IMMEDIATE

Nested EXECUTE IMMEDIATE commands are not allowed. Please ensure that the SQL query provided (<sqlString>) does not contain another EXECUTE IMMEDIATE command.

0A000 # CANNOT_INVOKE_IN_TRANSFORMATIONS

Dataset transformations and actions can only be invoked by the driver, not inside of other Dataset transformations; for example, dataset1.map(x => dataset2.values.count() * x) is invalid because the values transformation and count action cannot be performed inside of the dataset1.map transformation. For more information, see SPARK-28702.
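
A minimal sketch of the usual fix in a spark-shell session (ds1 and ds2 are hypothetical Datasets): run the action on the driver first and close over the plain result:

    import spark.implicits._

    val ds1 = spark.range(10)
    val ds2 = spark.range(5)

    // Invalid: ds2.count() would execute inside a transformation of ds1.
    // ds1.map(x => ds2.count() * x)

    // Valid: the count action runs on the driver; the map closes over a plain Long.
    val factor = ds2.count()
    val scaled = ds1.map(x => factor * x)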

0A000 # CANNOT_UPDATE_FIELD

Cannot update <table> field <fieldName> type:

# ARRAY_TYPE

Update the element by updating <fieldName>.element.

# INTERVAL_TYPE

Update an interval by updating its fields.

# MAP_TYPE

Update a map by updating <fieldName>.key or <fieldName>.value.

# STRUCT_TYPE

Update a struct by updating its fields.

# USER_DEFINED_TYPE

Update a UserDefinedType[<udtSql>] by updating its fields.

0A000 # CLASS_UNSUPPORTED_BY_MAP_OBJECTS

MapObjects does not support the class <cls> as resulting collection.

0A000 # COLUMN_ARRAY_ELEMENT_TYPE_MISMATCH

Some values in field <pos> are incompatible with the column array type. Expected type <type>.

0A000 # CONCURRENT_QUERY

Another instance of this query was just started by a concurrent session.

0A000 # CREATE_PERMANENT_VIEW_WITHOUT_ALIAS

Not allowed to create the permanent view <name> without explicitly assigning an alias for the expression <attr>.
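
For example, a sketch assuming a hypothetical table t with an integer column a; giving the computed expression an explicit alias avoids the error:

    // Fails: the expression a + 1 has no explicit alias.
    // spark.sql("CREATE VIEW v AS SELECT a + 1 FROM t")

    // Works: the expression is aliased, so the view column has a stable name.
    spark.sql("CREATE VIEW v AS SELECT a + 1 AS a_plus_one FROM t")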

0A000 # DISTINCT_WINDOW_FUNCTION_UNSUPPORTED

Distinct window functions are not supported: <windowExpr>.

0A000 # EMPTY_SCHEMA_NOT_SUPPORTED_FOR_DATASOURCE

The <format> datasource does not support writing empty or nested empty schemas. Please make sure the data schema has at least one column.

0A000 # INVALID_PANDAS_UDF_PLACEMENT

The group aggregate pandas UDF <functionList> cannot be invoked together with other, non-pandas aggregate functions.

0A000 # INVALID_PARTITION_COLUMN_DATA_TYPE

Cannot use <type> for partition column.

0A000 # MULTI_UDF_INTERFACE_ERROR

Not allowed to implement multiple UDF interfaces, UDF class <className>.

0A000 # NAMED_PARAMETER_SUPPORT_DISABLED

Cannot call function <functionName> because named argument references are not enabled here. In this case, the named argument reference was <argument>. Set "spark.sql.allowNamedFunctionArguments" to "true" to turn on the feature.

0A000 # NOT_SUPPORTED_CHANGE_COLUMN

ALTER TABLE ALTER/CHANGE COLUMN is not supported for changing <table>'s column <originName> with type <originType> to <newName> with type <newType>.

0A000 # NOT_SUPPORTED_COMMAND_FOR_V2_TABLE

<cmd> is not supported for v2 tables.

0A000 # NOT_SUPPORTED_COMMAND_WITHOUT_HIVE_SUPPORT

<cmd> is not supported. If you want to enable it, set "spark.sql.catalogImplementation" to "hive".

0A000 # NOT_SUPPORTED_IN_JDBC_CATALOG

Command not supported in the JDBC catalog:

# COMMAND

<cmd>

# COMMAND_WITH_PROPERTY

<cmd> with property <property>.

0A000 # PIPE_OPERATOR_AGGREGATE_EXPRESSION_CONTAINS_NO_AGGREGATE_FUNCTION

Non-grouping expression <expr> is provided as an argument to the |> AGGREGATE pipe operator but does not contain any aggregate function; please update it to include an aggregate function and then retry the query.

0A000 # PIPE_OPERATOR_CONTAINS_AGGREGATE_FUNCTION

Aggregate function <expr> is not allowed when using the pipe operator |> <clause> clause; please use the pipe operator |> AGGREGATE clause instead.

0A000 # SCALAR_SUBQUERY_IS_IN_GROUP_BY_OR_AGGREGATE_FUNCTION

The correlated scalar subquery '<sqlExpr>' is neither present in GROUP BY, nor in an aggregate function. Add it to GROUP BY using ordinal position or wrap it in first() (or first_value) if you don't care which value you get.

0A000 # STAR_GROUP_BY_POS

Star (*) is not allowed in a select list when GROUP BY an ordinal position is used.

0A000 # UNSUPPORTED_ADD_FILE

Adding the file is not supported:

# DIRECTORY

The file <path> is a directory, consider setting "spark.sql.legacy.addSingleFileInAddFile" to "false".

# LOCAL_DIRECTORY

The local directory <path> is not supported in a non-local master mode.

0A000 # UNSUPPORTED_ARROWTYPE

Unsupported Arrow type <typeName>.

0A000 # UNSUPPORTED_CALL

Cannot call the method "<methodName>" of the class "<className>".

# FIELD_INDEX

The row must have a schema to get the index of the field <fieldName>.

# WITHOUT_SUGGESTION

0A000 # UNSUPPORTED_CHAR_OR_VARCHAR_AS_STRING

The char/varchar type can't be used in the table schema. If you want Spark to treat them as the string type, as in Spark 3.0 and earlier, set "spark.sql.legacy.charVarcharAsString" to "true".

0A000 # UNSUPPORTED_COLLATION

Collation <collationName> is not supported for:

# FOR_FUNCTION

function <functionName>. Please try to use a different collation.

0A000 # UNSUPPORTED_CONNECT_FEATURE

Feature is not supported in Spark Connect:

# DATASET_QUERY_EXECUTION

Access to the Dataset Query Execution. This is a server-side developer API.

# RDD

Resilient Distributed Datasets (RDDs).

# SESSION_BASE_RELATION_TO_DATAFRAME

Invoking SparkSession 'baseRelationToDataFrame'. This is a server-side developer API.

# SESSION_EXECUTE_COMMAND

Invoking SparkSession 'executeCommand'.

# SESSION_EXPERIMENTAL_METHODS

Access to SparkSession Experimental (methods). This is a server-side developer API.

# SESSION_LISTENER_MANAGER

Access to the SparkSession Listener Manager. This is a server-side developer API.

# SESSION_SESSION_STATE

Access to the SparkSession Session State. This is a server-side developer API.

# SESSION_SHARED_STATE

Access to the SparkSession Shared State. This is a server-side developer API.

# SESSION_SPARK_CONTEXT

Access to the SparkContext.

0A000 # UNSUPPORTED_DATASOURCE_FOR_DIRECT_QUERY

Unsupported data source type for direct query on files: <dataSourceType>.

0A000 # UNSUPPORTED_DATATYPE

Unsupported data type <typeName>.

0A000 # UNSUPPORTED_DATA_SOURCE_SAVE_MODE

The data source "<source>" cannot be written in the <createMode> mode. Please use either the "Append" or "Overwrite" mode instead.

0A000 # UNSUPPORTED_DATA_TYPE_FOR_DATASOURCE

The <format> datasource doesn't support the column <columnName> of the type <columnType>.

0A000 # UNSUPPORTED_DATA_TYPE_FOR_ENCODER

Cannot create encoder for <dataType>. Please use a different output data type for your UDF or DataFrame.

0A000 # UNSUPPORTED_DEFAULT_VALUE

DEFAULT column values are not supported.

# WITHOUT_SUGGESTION
# WITH_SUGGESTION

Enable it by setting "spark.sql.defaultColumn.enabled" to "true".

0A000 # UNSUPPORTED_DESERIALIZER

The deserializer is not supported:

# DATA_TYPE_MISMATCH

need a(n) <desiredType> field but got <dataType>.

# FIELD_NUMBER_MISMATCH

tried to map <schema> to Tuple<ordinal>, but failed because the number of fields does not line up.

0A000 # UNSUPPORTED_FEATURE

The feature is not supported:

# AES_MODE

AES-<mode> with the padding <padding> by the <functionName> function.

# AES_MODE_AAD

<functionName> with AES-<mode> does not support additional authenticate data (AAD).

# AES_MODE_IV

<functionName> with AES-<mode> does not support initialization vectors (IVs).

# ALTER_TABLE_SERDE_FOR_DATASOURCE_TABLE

ALTER TABLE SET SERDE is not supported for table <tableName> created with the datasource API. Consider using an external Hive table or updating the table properties with compatible options for your table format.

# ANALYZE_UNCACHED_TEMP_VIEW

The ANALYZE TABLE FOR COLUMNS command can only operate on temporary views that have already been cached. Consider caching the view <viewName>.

# ANALYZE_UNSUPPORTED_COLUMN_TYPE

The ANALYZE TABLE FOR COLUMNS command does not support the type <columnType> of the column <columnName> in the table <tableName>.

# ANALYZE_VIEW

The ANALYZE TABLE command does not support views.

# CATALOG_OPERATION

Catalog <catalogName> does not support <operation>.

# CLAUSE_WITH_PIPE_OPERATORS

The SQL pipe operator syntax using |> does not support <clauses>.

# COLLATIONS_IN_MAP_KEYS

Collated strings for keys of maps.

# COMBINATION_QUERY_RESULT_CLAUSES

Combination of ORDER BY/SORT BY/DISTRIBUTE BY/CLUSTER BY.

# COMMENT_NAMESPACE

Attach a comment to the namespace <namespace>.

# DESC_TABLE_COLUMN_PARTITION

DESC TABLE COLUMN for a specific partition.

# DROP_DATABASE

Drop the default database <database>.

# DROP_NAMESPACE

Drop the namespace <namespace>.

# HIVE_TABLE_TYPE

The table <tableName> is a Hive <tableType>.

# HIVE_WITH_ANSI_INTERVALS

Hive table <tableName> with ANSI intervals.

# INSERT_PARTITION_SPEC_IF_NOT_EXISTS

INSERT INTO <tableName> with IF NOT EXISTS in the PARTITION spec.

# LAMBDA_FUNCTION_WITH_PYTHON_UDF

Lambda function with Python UDF <funcName> in a higher order function.

# LATERAL_COLUMN_ALIAS_IN_AGGREGATE_FUNC

Referencing a lateral column alias <lca> in the aggregate function <aggFunc>.

# LATERAL_COLUMN_ALIAS_IN_AGGREGATE_WITH_WINDOW_AND_HAVING

Referencing lateral column alias <lca> in the aggregate query both with window expressions and with the HAVING clause. Please rewrite the aggregate query by removing the HAVING clause or removing the lateral alias reference in the SELECT list.

# LATERAL_COLUMN_ALIAS_IN_GENERATOR

Referencing a lateral column alias <lca> in generator expression <generatorExpr>.

# LATERAL_COLUMN_ALIAS_IN_GROUP_BY

Referencing a lateral column alias via GROUP BY alias/ALL is not supported yet.

# LATERAL_COLUMN_ALIAS_IN_WINDOW

Referencing a lateral column alias <lca> in window expression <windowExpr>.

# LATERAL_JOIN_USING

JOIN USING with LATERAL correlation.

# LITERAL_TYPE

Literal for '<value>' of <type>.

# MULTIPLE_BUCKET_TRANSFORMS

Multiple bucket TRANSFORMs.

# MULTI_ACTION_ALTER

The target JDBC server hosting table <tableName> does not support ALTER TABLE with multiple actions. Split the ALTER TABLE up into individual actions to avoid this error.

# ORC_TYPE_CAST

Unable to convert <orcType> of ORC to data type <toType>.

# OVERWRITE_BY_SUBQUERY

INSERT OVERWRITE with a subquery condition.

# PANDAS_UDAF_IN_PIVOT

Pandas user defined aggregate function in the PIVOT clause.

# PARAMETER_MARKER_IN_UNEXPECTED_STATEMENT

Parameter markers are not allowed in <statement>.

# PARTITION_BY_VARIANT

Cannot use VARIANT producing expressions to partition a DataFrame, but the type of expression <expr> is <dataType>.

# PARTITION_WITH_NESTED_COLUMN_IS_UNSUPPORTED

Invalid partitioning: <cols> is missing or is in a map or array.

# PIPE_OPERATOR_AGGREGATE_UNSUPPORTED_CASE

The SQL pipe operator syntax with aggregation (using |> AGGREGATE) does not support <case>.

# PIVOT_AFTER_GROUP_BY

PIVOT clause following a GROUP BY clause. Consider pushing the GROUP BY into a subquery.

# PIVOT_TYPE

Pivoting by the value '<value>' of the column data type <type>.

# PURGE_PARTITION

Partition purge.

# PURGE_TABLE

Purge table.

# PYTHON_UDF_IN_ON_CLAUSE

Python UDF in the ON clause of a <joinType> JOIN. In case of an INNER JOIN consider rewriting to a CROSS JOIN with a WHERE clause.

# QUERY_ONLY_CORRUPT_RECORD_COLUMN

Queries from raw JSON/CSV/XML files are disallowed when the referenced columns only include the internal corrupt record column (named _corrupt_record by default). For example: spark.read.schema(schema).json(file).filter($"_corrupt_record".isNotNull).count() and spark.read.schema(schema).json(file).select("_corrupt_record").show(). Instead, you can cache or save the parsed results and then send the same query. For example, val df = spark.read.schema(schema).json(file).cache() and then df.filter($"_corrupt_record".isNotNull).count().

# REMOVE_NAMESPACE_COMMENT

Remove a comment from the namespace <namespace>.

# REPLACE_NESTED_COLUMN

The replace function does not support nested column <colName>.

# SET_NAMESPACE_PROPERTY

<property> is a reserved namespace property, <msg>.

# SET_OPERATION_ON_MAP_TYPE

Cannot have MAP type columns in DataFrame which calls set operations (INTERSECT, EXCEPT, etc.), but the type of column <colName> is <dataType>.

# SET_OPERATION_ON_VARIANT_TYPE

Cannot have VARIANT type columns in DataFrame which calls set operations (INTERSECT, EXCEPT, etc.), but the type of column <colName> is <dataType>.

# SET_PROPERTIES_AND_DBPROPERTIES

set PROPERTIES and DBPROPERTIES at the same time.

# SET_TABLE_PROPERTY

<property> is a reserved table property, <msg>.

# SET_VARIABLE_USING_SET

<variableName> is a VARIABLE and cannot be updated using the SET statement. Use SET VARIABLE <variableName> = ... instead.
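
As a sketch (the variable name debug is hypothetical), a session variable is declared once and then updated with SET VARIABLE rather than plain SET:

    spark.sql("DECLARE VARIABLE debug BOOLEAN DEFAULT false")

    // Fails: plain SET targets configuration properties, not SQL variables.
    // spark.sql("SET debug = true")

    // Works: SET VARIABLE (or SET VAR) updates the session variable.
    spark.sql("SET VARIABLE debug = true")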

# SQL_SCRIPTING

SQL Scripting is under development and not all features are supported. SQL Scripting enables users to write procedural SQL including control flow and error handling. To enable existing features, set <sqlScriptingEnabled> to true.

# SQL_SCRIPTING_WITH_POSITIONAL_PARAMETERS

Positional parameters are not supported with SQL Scripting.

# STATE_STORE_MULTIPLE_COLUMN_FAMILIES

Creating multiple column families with <stateStoreProvider> is not supported.

# STATE_STORE_REMOVING_COLUMN_FAMILIES

Removing column families with <stateStoreProvider> is not supported.

# STATE_STORE_TTL

State TTL with <stateStoreProvider> is not supported. Please use RocksDBStateStoreProvider.

# TABLE_OPERATION

Table <tableName> does not support <operation>. Please check the current catalog and namespace to make sure the qualified table name is expected, and also check the catalog implementation which is configured by "spark.sql.catalog".

# TEMPORARY_VIEW_WITH_SCHEMA_BINDING_MODE

Temporary views cannot be created with the WITH SCHEMA clause. Recreate the temporary view when the underlying schema changes, or use a persisted view.

# TIME_TRAVEL

Time travel on the relation: <relationId>.

# TOO_MANY_TYPE_ARGUMENTS_FOR_UDF_CLASS

UDF class with <num> type arguments.

# TRANSFORM_DISTINCT_ALL

TRANSFORM with the DISTINCT/ALL clause.

# TRANSFORM_NON_HIVE

TRANSFORM with SERDE is only supported in Hive mode.

# TRIM_COLLATION

TRIM specifier in the collation.

# UPDATE_COLUMN_NULLABILITY

Update column nullability for MySQL and MS SQL Server.

# WRITE_FOR_BINARY_SOURCE

Write for the binary file data source.

0A000 # UNSUPPORTED_JOIN_TYPE

Unsupported join type '<typ>'. Supported join types include: <supported>.

0A000 # UNSUPPORTED_PARTITION_TRANSFORM

Unsupported partition transform: <transform>. The supported transforms are identity, bucket, and clusterBy. Ensure your transform expression uses one of these.

0A000 # UNSUPPORTED_SAVE_MODE

The save mode <saveMode> is not supported for:

# EXISTENT_PATH

an existent path.

# NON_EXISTENT_PATH

a non-existent path.

0A000 # UNSUPPORTED_SHOW_CREATE_TABLE

The SHOW CREATE TABLE command is not supported:

# ON_DATA_SOURCE_TABLE_WITH_AS_SERDE

The table <tableName> is a Spark data source table. Please use SHOW CREATE TABLE without AS SERDE instead.

# ON_TEMPORARY_VIEW

The command is not supported on a temporary view <tableName>.

# ON_TRANSACTIONAL_HIVE_TABLE

Failed to execute the command against transactional Hive table <tableName>. Please use SHOW CREATE TABLE <tableName> AS SERDE to show Hive DDL instead.

# WITH_UNSUPPORTED_FEATURE

Failed to execute the command against table/view <tableName> which is created by Hive and uses the following unsupported features <unsupportedFeatures>.

# WITH_UNSUPPORTED_SERDE_CONFIGURATION

Failed to execute the command against the table <tableName> which is created by Hive and uses the following unsupported serde configuration <configs>. Please use SHOW CREATE TABLE <tableName> AS SERDE to show Hive DDL instead.

0A000 # UNSUPPORTED_SINGLE_PASS_ANALYZER_FEATURE

The single-pass analyzer cannot process this query or command because it does not yet support <feature>.

0A000 # UNSUPPORTED_STREAMING_OPERATOR_WITHOUT_WATERMARK

The <outputMode> output mode is not supported for <statefulOperator> on streaming DataFrames/Datasets without a watermark.

0A000 # UNSUPPORTED_SUBQUERY_EXPRESSION_CATEGORY

Unsupported subquery expression:

# ACCESSING_OUTER_QUERY_COLUMN_IS_NOT_ALLOWED

Accessing an outer query column is not allowed in this location: <treeNode>.

# AGGREGATE_FUNCTION_MIXED_OUTER_LOCAL_REFERENCES

Found an aggregate function in a correlated predicate that has both outer and local references, which is not supported: <function>.

# CORRELATED_COLUMN_IS_NOT_ALLOWED_IN_PREDICATE

A correlated column is not allowed in the predicate: <treeNode>.

# CORRELATED_COLUMN_NOT_FOUND

A correlated outer name reference within a subquery expression body was not found in the enclosing query: <value>.

# CORRELATED_REFERENCE

Expressions referencing the outer query are not supported outside of WHERE/HAVING clauses: <sqlExprs>.

# HIGHER_ORDER_FUNCTION

Subquery expressions are not supported within higher-order functions. Please remove all subquery expressions from higher-order functions and then try the query again.

# LATERAL_JOIN_CONDITION_NON_DETERMINISTIC

Lateral join condition cannot be non-deterministic: <condition>.

# MUST_AGGREGATE_CORRELATED_SCALAR_SUBQUERY

Correlated scalar subqueries must be aggregated to return at most one row.

# NON_CORRELATED_COLUMNS_IN_GROUP_BY

A GROUP BY clause in a scalar correlated subquery cannot contain non-correlated columns: <value>.

# NON_DETERMINISTIC_LATERAL_SUBQUERIES

Non-deterministic lateral subqueries are not supported when joining with outer relations that produce more than one row: <treeNode>.

# SCALAR_SUBQUERY_IN_VALUES

Scalar subqueries in the VALUES clause.

# UNSUPPORTED_CORRELATED_EXPRESSION_IN_JOIN_CONDITION

Correlated subqueries in the join predicate cannot reference both join inputs: <subqueryExpression>.

# UNSUPPORTED_CORRELATED_REFERENCE_DATA_TYPE

Correlated column reference '<expr>' cannot be <dataType> type.

# UNSUPPORTED_CORRELATED_SCALAR_SUBQUERY

Correlated scalar subqueries can only be used in filters, aggregations, projections, and UPDATE/MERGE/DELETE commands: <treeNode>.

# UNSUPPORTED_IN_EXISTS_SUBQUERY

IN/EXISTS predicate subqueries can only be used in filters, joins, aggregations, window functions, projections, and UPDATE/MERGE/DELETE commands: <treeNode>.

# UNSUPPORTED_TABLE_ARGUMENT

Table arguments are used in a function where they are not supported: <treeNode>.

0A000 # UNSUPPORTED_TYPED_LITERAL

Literals of the type <unsupportedType> are not supported. Supported types are <supportedTypes>.

0AKD0 # CANNOT_RENAME_ACROSS_SCHEMA

Renaming a <type> across schemas is not allowed.

21000 # BOOLEAN_STATEMENT_WITH_EMPTY_ROW

Boolean statement <invalidStatement> is invalid. Expected single row with a value of the BOOLEAN type, but got an empty row.

21000 # ROW_SUBQUERY_TOO_MANY_ROWS

More than one row returned by a subquery used as a row.

21000 # SCALAR_SUBQUERY_TOO_MANY_ROWS

More than one row returned by a subquery used as an expression.

21S01 # CREATE_VIEW_COLUMN_ARITY_MISMATCH

Cannot create view <viewName>, the reason is

# NOT_ENOUGH_DATA_COLUMNS

not enough data columns: View columns: <viewColumns>. Data columns: <dataColumns>.

# TOO_MANY_DATA_COLUMNS

too many data columns: View columns: <viewColumns>. Data columns: <dataColumns>.

21S01 # INSERT_COLUMN_ARITY_MISMATCH

Cannot write to <tableName>, the reason is

# NOT_ENOUGH_DATA_COLUMNS

not enough data columns: Table columns: <tableColumns>. Data columns: <dataColumns>.

# TOO_MANY_DATA_COLUMNS

too many data columns: Table columns: <tableColumns>. Data columns: <dataColumns>.

21S01 # INSERT_PARTITION_COLUMN_ARITY_MISMATCH

Cannot write to '<tableName>', <reason>: Table columns: <tableColumns>. Partition columns with static values: <staticPartCols>. Data columns: <dataColumns>.

22000 # HLL_UNION_DIFFERENT_LG_K

Sketches have different lgConfigK values: <left> and <right>. Set the allowDifferentLgConfigK parameter to true to call <function> with different lgConfigK values.
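
A sketch of opting in via the third argument of hll_union (the inline sample data is made up):

    // Sketches built with different lgConfigK values (12 vs. 13) can still be
    // merged when allowDifferentLgConfigK is set to true.
    spark.sql("""
      SELECT hll_sketch_estimate(
               hll_union(hll_sketch_agg(col1, 12), hll_sketch_agg(col2, 13), true))
      FROM VALUES (1, 4), (1, 4), (2, 5), (2, 5), (3, 6) AS tab(col1, col2)
    """).show()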

22000 # MALFORMED_CHARACTER_CODING

Invalid value found when performing <function> with <charset>.

22003 # ARITHMETIC_OVERFLOW

<message>.<alternative> If necessary, set <config> to "false" to bypass this error.

22003 # BINARY_ARITHMETIC_OVERFLOW

<value1> <symbol> <value2> caused overflow. Use <functionName> to ignore the overflow problem and return NULL.

22003 # CAST_OVERFLOW

The value <value> of the type <sourceType> cannot be cast to <targetType> due to an overflow. Use try_cast to tolerate overflow and return NULL instead.
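
For instance, under ANSI mode a cast that overflows the target type raises this error, while try_cast tolerates it (a minimal sketch):

    // CAST(50000 AS SMALLINT) would overflow (SMALLINT max is 32767);
    // try_cast returns NULL instead of raising CAST_OVERFLOW.
    spark.sql("SELECT try_cast(50000 AS SMALLINT)").show()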

22003 # CAST_OVERFLOW_IN_TABLE_INSERT

Fail to assign a value of <sourceType> type to the <targetType> type column or variable <columnName> due to an overflow. Use try_cast on the input value to tolerate overflow and return NULL instead.

22003 # COLUMN_ORDINAL_OUT_OF_BOUNDS

Column ordinal out of bounds. The number of columns in the table is <attributesLength>, but the column ordinal is <ordinal>. Attributes are the following: <attributes>.

22003 # DECIMAL_PRECISION_EXCEEDS_MAX_PRECISION

Decimal precision <precision> exceeds max precision <maxPrecision>.

22003 # INCORRECT_RAMP_UP_RATE

Max offset with <rowsPerSecond> rowsPerSecond is <maxSeconds>, but 'rampUpTimeSeconds' is <rampUpTimeSeconds>.

22003 # INVALID_ARRAY_INDEX

The index <indexValue> is out of bounds. The array has <arraySize> elements. Use the SQL function get() to tolerate accessing an element at an invalid index and return NULL instead.

22003 # INVALID_ARRAY_INDEX_IN_ELEMENT_AT

The index <indexValue> is out of bounds. The array has <arraySize> elements. Use try_element_at to tolerate accessing an element at an invalid index and return NULL instead.
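
A small sketch of both suggested alternatives; note that get() is 0-indexed while element_at and try_element_at are 1-indexed:

    // Both return NULL for an out-of-bounds index instead of raising an error.
    spark.sql("SELECT get(array(1, 2, 3), 5)").show()            // 0-indexed
    spark.sql("SELECT try_element_at(array(1, 2, 3), 5)").show() // 1-indexed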

22003 # INVALID_BITMAP_POSITION

The 0-indexed bitmap position <bitPosition> is out of bounds. The bitmap has <bitmapNumBits> bits (<bitmapNumBytes> bytes).

22003 # INVALID_BOUNDARY

The boundary <boundary> is invalid: <invalidValue>.

# END

Expected the value to be '0', '<longMaxValue>', or in the range [<intMinValue>, <intMaxValue>].

# START

Expected the value to be '0', '<longMinValue>', or in the range [<intMinValue>, <intMaxValue>].

22003 # INVALID_INDEX_OF_ZERO

The index 0 is invalid. An index must be either < 0 or > 0 (the first element has index 1).

22003 # INVALID_NUMERIC_LITERAL_RANGE

Numeric literal <rawStrippedQualifier> is outside the valid range for <typeName> with minimum value of <minValue> and maximum value of <maxValue>. Please adjust the value accordingly.

22003 # NEGATIVE_VALUES_IN_FREQUENCY_EXPRESSION

Found a negative value in <frequencyExpression>: <negativeValue>, but expected a positive integral value.

22003 # NUMERIC_OUT_OF_SUPPORTED_RANGE

The value <value> cannot be interpreted as a numeric since it has more than 38 digits.

22003 # NUMERIC_VALUE_OUT_OF_RANGE

# WITHOUT_SUGGESTION

The <roundedValue> rounded half up from <originalValue> cannot be represented as Decimal(<precision>, <scale>).

# WITH_SUGGESTION

<value> cannot be represented as Decimal(<precision>, <scale>). If necessary, set <config> to "false" to bypass this error, and return NULL instead.

22003 # SUM_OF_LIMIT_AND_OFFSET_EXCEEDS_MAX_INT

The sum of the LIMIT clause and the OFFSET clause must not be greater than the maximum 32-bit integer value (2,147,483,647) but found limit = <limit>, offset = <offset>.

22004 # COMPARATOR_RETURNS_NULL

The comparator has returned a NULL for a comparison between <firstValue> and <secondValue>. It should return a positive integer for "greater than", 0 for "equal" and a negative integer for "less than". To revert to deprecated behavior where NULL is treated as 0 (equal), you must set "spark.sql.legacy.allowNullComparisonResultInArraySort" to "true".
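
As an illustration, a comparator for array_sort that handles NULL inputs explicitly so it never returns NULL itself (a sketch; the NULLs-last ordering is an arbitrary choice):

    // The lambda always returns -1, 0, or 1, even when an element is NULL.
    spark.sql("""
      SELECT array_sort(
               array(3, null, 1, 2),
               (l, r) -> CASE WHEN l IS NULL AND r IS NULL THEN 0
                              WHEN l IS NULL THEN 1
                              WHEN r IS NULL THEN -1
                              WHEN l < r THEN -1
                              WHEN l > r THEN 1
                              ELSE 0 END)
    """).show(false)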

22004 # NULL_QUERY_STRING_EXECUTE_IMMEDIATE

EXECUTE IMMEDIATE requires a non-null variable as the query string, but the provided variable <varName> is null.

22004 # TUPLE_IS_EMPTY

Due to Scala's limited support for tuples, empty tuples are not supported.

22006 # CANNOT_PARSE_INTERVAL

Unable to parse <intervalString>. Please ensure that the value provided is in a valid format for defining an interval. You can reference the documentation for the correct format. If the issue persists, please double check that the input value is not null or empty and try again.

22006 # INVALID_INTERVAL_FORMAT

Error parsing '<input>' to interval. Please ensure that the value provided is in a valid format for defining an interval. You can reference the documentation for the correct format.

# ARITHMETIC_EXCEPTION

Uncaught arithmetic exception while parsing '<input>'.

# DAY_TIME_PARSING

Error parsing interval day-time string: <msg>.

# INPUT_IS_EMPTY

Interval string cannot be empty.

# INPUT_IS_NULL

Interval string cannot be null.

# INTERVAL_PARSING

Error parsing interval <interval> string.

# INVALID_FRACTION

<unit> cannot have fractional part.

# INVALID_PRECISION

Interval can only support nanosecond precision; <value> is out of range.

# INVALID_PREFIX

Invalid interval prefix <prefix>.

# INVALID_UNIT

Invalid unit <unit>.

# INVALID_VALUE

Invalid value <value>.

# MISSING_NUMBER

Expected a number after <word> but hit EOL.

# MISSING_UNIT

Expected a unit name after <word> but hit EOL.

# SECOND_NANO_FORMAT

Interval string does not match second-nano format of ss.nnnnnnnnn.

# TIMEZONE_INTERVAL_OUT_OF_RANGE

The interval value must be in the range of [-18, +18] hours with second precision.

# UNKNOWN_PARSING_ERROR

Unknown error when parsing <word>.

# UNMATCHED_FORMAT_STRING

Interval string does not match <intervalStr> format of <supportedFormat> when cast to <typeName>: <input>.

# UNMATCHED_FORMAT_STRING_WITH_NOTICE

Interval string does not match <intervalStr> format of <supportedFormat> when cast to <typeName>: <input>. Set "spark.sql.legacy.fromDayTimeString.enabled" to "true" to restore the behavior before Spark 3.0.

# UNRECOGNIZED_NUMBER

Unrecognized number <number>.

# UNSUPPORTED_FROM_TO_EXPRESSION

The (interval '<input>' <from> to <to>) expression is not supported.

22006 # INVALID_INTERVAL_WITH_MICROSECONDS_ADDITION

Cannot add an interval to a date because its microseconds part is not 0. If necessary, set <ansiConfig> to "false" to bypass this error.

22007 # CANNOT_PARSE_TIMESTAMP

<message>. Use try_to_timestamp to tolerate invalid input string and return NULL instead.
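
A minimal sketch of the suggested fallback:

    // try_to_timestamp returns NULL for the malformed input instead of failing.
    spark.sql("SELECT try_to_timestamp('2024-02-30 10:00:00')").show()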

22007 # INVALID_DATETIME_PATTERN

Unrecognized datetime pattern: <pattern>.

# ILLEGAL_CHARACTER

Illegal pattern character found in datetime pattern: <c>. Please provide a legal character.

# LENGTH

Too many letters in datetime pattern: <pattern>. Please reduce the pattern length.

# SECONDS_FRACTION

Cannot detect a seconds fraction pattern of variable length. Please make sure the pattern contains 'S', and does not contain illegal characters.

22008 # DATETIME_OVERFLOW

Datetime operation overflow: <operation>.

22009 # ILLEGAL_DAY_OF_WEEK

Illegal input for day of week: <string>.

22009 # INVALID_TIMEZONE

The timezone: <timeZone> is invalid. The timezone must be either a region-based zone ID or a zone offset. Region IDs must have the form 'area/city', such as 'America/Los_Angeles'. Zone offsets must be in the format '(+|-)HH', '(+|-)HH:mm' or '(+|-)HH:mm:ss', e.g. '-08', '+01:00' or '-13:33:33', and must be in the range from -18:00 to +18:00. 'Z' and 'UTC' are accepted as synonyms for '+00:00'.

2200E # NULL_MAP_KEY

Cannot use null as map key.

22012 # DIVIDE_BY_ZERO

Division by zero. Use try_divide to tolerate divisor being 0 and return NULL instead. If necessary, set <config> to "false" to bypass this error.
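
A minimal sketch of the suggested alternative:

    // try_divide yields NULL on division by zero instead of raising an error.
    spark.sql("SELECT try_divide(10, 0)").show()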

22012 # INTERVAL_DIVIDED_BY_ZERO

Division by zero. Use try_divide to tolerate divisor being 0 and return NULL instead.

22015 # INTERVAL_ARITHMETIC_OVERFLOW

Integer overflow while operating with intervals.

# WITHOUT_SUGGESTION

Try devising appropriate values for the interval parameters.

# WITH_SUGGESTION

Use <functionName> to tolerate overflow and return NULL instead.

22018 # CANNOT_PARSE_DECIMAL

Cannot parse decimal. Please ensure that the input is a valid number with optional decimal point or comma separators.

22018 # CANNOT_PARSE_PROTOBUF_DESCRIPTOR

Error parsing descriptor bytes into Protobuf FileDescriptorSet.

22018 # CAST_INVALID_INPUT

The value <expression> of the type <sourceType> cannot be cast to <targetType> because it is malformed. Correct the value as per the syntax, or change its target type. Use try_cast to tolerate malformed input and return NULL instead.

22018 # CONVERSION_INVALID_INPUT

The value <str> (<fmt>) cannot be converted to <targetType> because it is malformed. Correct the value as per the syntax, or change its format. Use <suggestion> to tolerate malformed input and return NULL instead.

22018 # FAILED_PARSE_STRUCT_TYPE

Failed parsing struct: <raw>.

2201E # STRUCT_ARRAY_LENGTH_MISMATCH

The input row doesn't have the expected number of values required by the schema. <expected> fields are required while <actual> values are provided.

22022 # INVALID_CONF_VALUE

The value '<confValue>' in the config "<confName>" is invalid.

# DEFAULT_COLLATION

Cannot resolve the given default collation. Suggested valid collation names: ['<proposals>'].

# TIME_ZONE

Cannot resolve the given timezone.

22023 # DATETIME_FIELD_OUT_OF_BOUNDS

<rangeMessage>. If necessary set <ansiConfig> to "false" to bypass this error.

22023 # INVALID_FRACTION_OF_SECOND

Valid range for seconds is [0, 60] (inclusive), but the provided value is <secAndMicros>. To avoid this error, use try_make_timestamp, which returns NULL on error. If you do not want to use the session default timestamp version of this function, use try_make_timestamp_ntz or try_make_timestamp_ltz.

22023 # INVALID_JSON_RECORD_TYPE

Detected an invalid type of a JSON record while inferring a common schema in the mode <failFastMode>. Expected a STRUCT type, but found <invalidType>.

22023 # INVALID_PARAMETER_VALUE

The value of parameter(s) <parameter> in <functionName> is invalid:

# AES_CRYPTO_ERROR

detail message: <detailMessage>

# AES_IV_LENGTH

supports 16-byte CBC IVs and 12-byte GCM IVs, but got <actualLength> bytes for <mode>.

# AES_KEY_LENGTH

expects a binary value with 16, 24 or 32 bytes, but got <actualLength> bytes.

# BINARY_FORMAT

expects one of binary formats 'base64', 'hex', 'utf-8', but got <invalidFormat>.

# BIT_POSITION_RANGE

expects an integer value in [0, <upper>), but got <invalidValue>.

# BOOLEAN

expects a boolean literal, but got <invalidValue>.

# CHARSET

expects one of the <charsets>, but got <charset>.

# DATETIME_UNIT

expects one of the units without quotes YEAR, QUARTER, MONTH, WEEK, DAY, DAYOFYEAR, HOUR, MINUTE, SECOND, MILLISECOND, MICROSECOND, but got the string literal <invalidValue>.

# DOUBLE

expects a double literal, but got <invalidValue>.

# DTYPE

Unsupported dtype: <invalidValue>. Valid values: float64, float32.

# INTEGER

expects an integer literal, but got <invalidValue>.

# LENGTH

expects a length greater than or equal to 0, but got <length>.

# LONG

expects a long literal, but got <invalidValue>.

# NULL

expects a non-NULL value.

# PATTERN

<value>.

# REGEX_GROUP_INDEX

expects a group index between 0 and <groupCount>, but got <groupIndex>.

# START

expects a positive or negative value for start, but got 0.

# STRING

expects a string literal, but got <invalidValue>.

# ZERO_INDEX

expects %1$, %2$ and so on, but got %0$.

22023 # INVALID_REGEXP_REPLACE

Could not perform regexp_replace for source = "<source>", pattern = "<pattern>", replacement = "<replacement>" and position = <position>.

22023 # INVALID_VARIANT_CAST

The variant value <value> cannot be cast into <dataType>. Please use try_variant_get instead.
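
For illustration, a sketch using try_variant_get (assumes a Spark version with VARIANT support; the path and target type are made up):

    // "$.a" holds the string "x", which is not a valid int, so the result is
    // NULL rather than an INVALID_VARIANT_CAST error.
    spark.sql("""SELECT try_variant_get(parse_json('{"a": "x"}'), '$.a', 'int')""").show()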

22023 # INVALID_VARIANT_FROM_PARQUET

Invalid variant.

# MISSING_FIELD

Missing <field> field.

# NULLABLE_OR_NOT_BINARY_FIELD

The <field> must be a non-nullable binary.

# WRONG_NUM_FIELDS

Variant column must contain exactly two fields.

22023 # INVALID_VARIANT_GET_PATH

The path <path> is not a valid variant extraction path in <functionName>. A valid path should start with $ and be followed by zero or more segments like [123], .name, ['name'], or ["name"].

22023 # INVALID_VARIANT_SHREDDING_SCHEMA

The schema <schema> is not a valid variant shredding schema.

22023 # MALFORMED_RECORD_IN_PARSING

Malformed records are detected in record parsing: <badRecord>. Parse Mode: <failFastMode>. To process malformed records as a null result, try setting the option 'mode' to 'PERMISSIVE'.

# CANNOT_PARSE_JSON_ARRAYS_AS_STRUCTS

Parsing JSON arrays as structs is forbidden.

# CANNOT_PARSE_STRING_AS_DATATYPE

Cannot parse the value <fieldValue> of the field <fieldName> as target spark data type <targetType> from the input type <inputType>.

# WITHOUT_SUGGESTION

22023 # MALFORMED_VARIANT

Variant binary is malformed. Please check the data source is valid.

22023 # ROW_VALUE_IS_NULL

Found NULL in a row at the index <index>, expected a non-NULL value.

22023 # RULE_ID_NOT_FOUND

Could not find an id for the rule name "<ruleName>". Please modify RuleIdCollection.scala if you are adding a new rule.

22023 # SECOND_FUNCTION_ARGUMENT_NOT_INTEGER

The second argument of <functionName> function needs to be an integer.

22023 # TABLE_VALUED_FUNCTION_REQUIRED_METADATA_INCOMPATIBLE_WITH_CALL

Failed to evaluate the table function <functionName> because its table metadata <requestedMetadata>, but the function call <invalidFunctionCallProperty>.

22023 # TABLE_VALUED_FUNCTION_REQUIRED_METADATA_INVALID

Failed to evaluate the table function <functionName> because its table metadata was invalid; <reason>.

22023 # UNKNOWN_PRIMITIVE_TYPE_IN_VARIANT

Unknown primitive type with id <id> was found in a variant value.

22023 # VARIANT_CONSTRUCTOR_SIZE_LIMIT

Cannot construct a Variant larger than 16 MiB. The maximum allowed size of a Variant value is 16 MiB.

22023 # VARIANT_DUPLICATE_KEY

Failed to build variant because of a duplicate object key <key>.

22023 # VARIANT_SIZE_LIMIT

Cannot build variant bigger than <sizeLimit> in <functionName>. Please avoid large input strings to this expression (for example, add function call(s) to check the expression size and convert it to NULL first if it is too big).

22024 # NULL_DATA_SOURCE_OPTION

Data source read/write option <option> cannot have a null value.

22029 # INVALID_UTF8_STRING

Invalid UTF-8 byte sequence found in string: <str>.

22032 # INVALID_JSON_ROOT_FIELD

Cannot convert JSON root field to target Spark type.

22032 # INVALID_JSON_SCHEMA_MAP_TYPE

Input schema <jsonSchema> can only contain STRING as a key type for a MAP.

2203G # CANNOT_PARSE_JSON_FIELD

Cannot parse the field name <fieldName> and the value <fieldValue> of the JSON token type <jsonType> to target Spark data type <dataType>.

2203G # FAILED_ROW_TO_JSON

Failed to convert the row value <value> of the class <class> to the target SQL type <sqlType> in the JSON format.

2203G # INVALID_JSON_DATA_TYPE

Failed to convert the JSON string '<invalidType>' to a data type. Please enter a valid data type.

2203G # INVALID_JSON_DATA_TYPE_FOR_COLLATIONS

Collations can only be applied to string types, but the JSON data type is <jsonType>.

22546 # CANNOT_DECODE_URL

The provided URL cannot be decoded: <url>. Please ensure that the URL is properly formatted and try again.

22546 # HLL_INVALID_INPUT_SKETCH_BUFFER

Invalid call to <function>; only valid HLL sketch buffers are supported as inputs (such as those produced by the hll_sketch_agg function).

22546 # HLL_INVALID_LG_K

Invalid call to <function>; the lgConfigK value must be between <min> and <max>, inclusive: <value>.

22546 # INVALID_BOOLEAN_STATEMENT

Boolean statement is expected in the condition, but <invalidStatement> was found.

22KD3 # AVRO_INCOMPATIBLE_READ_TYPE

Cannot convert Avro <avroPath> to SQL <sqlPath> because the original encoded data type is <avroType>, however you're trying to read the field as <sqlType>, which would lead to an incorrect answer. To allow reading this field, enable the SQL configuration: "spark.sql.legacy.avro.allowIncompatibleSchema".

22KD3 # AVRO_NOT_LOADED_SQL_FUNCTIONS_UNUSABLE

Cannot call the <functionName> SQL function because the Avro data source is not loaded. Please restart your job or session with the 'spark-avro' package loaded, such as by using the --packages argument on the command line, and then retry your query or command.

22KD3 # CANNOT_USE_KRYO

Cannot load Kryo serialization codec. Kryo serialization cannot be used in the Spark Connect client. Use Java serialization, provide a custom Codec, or use Spark Classic instead.

22KD3 # PROTOBUF_NOT_LOADED_SQL_FUNCTIONS_UNUSABLE

Cannot call the <functionName> SQL function because the Protobuf data source is not loaded. Please restart your job or session with the 'spark-protobuf' package loaded, such as by using the --packages argument on the command line, and then retry your query or command.

22P02 # INVALID_URL

The URL is invalid: <url>. Use try_parse_url to tolerate an invalid URL and return NULL instead.

22P03 # INVALID_BYTE_STRING

The expected format is ByteString, but was <unsupported> (<class>).

23505 # DUPLICATED_MAP_KEY

Duplicate map key <key> was found, please check the input data. If you want to remove the duplicated keys, you can set <mapKeyDedupPolicy> to "LAST_WIN" so that the key inserted last takes precedence.
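
A sketch of the LAST_WIN behavior:

    // Under the default policy (EXCEPTION) this map literal raises
    // DUPLICATED_MAP_KEY; with LAST_WIN the second entry for key 1 wins.
    spark.conf.set("spark.sql.mapKeyDedupPolicy", "LAST_WIN")
    spark.sql("SELECT map(1, 'a', 1, 'b')[1]").show()   // b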

23505 # DUPLICATE_KEY

Found duplicate keys <keyColumn>.

23K01 # MERGE_CARDINALITY_VIOLATION

The ON search condition of the MERGE statement matched a single row from the target table with multiple rows of the source table. This could result in the target row being operated on more than once with an update or delete operation and is not allowed.

2BP01 # SCHEMA_NOT_EMPTY

Cannot drop a schema <schemaName> because it contains objects. Use DROP SCHEMA ... CASCADE to drop the schema and all its objects.

38000 # CLASS_NOT_OVERRIDE_EXPECTED_METHOD

<className> must override either <method1> or <method2>.

38000 # FAILED_FUNCTION_CALL

Failed to prepare the function <funcName> for the call. Please double-check the function's arguments.

38000 # FAILED_TO_LOAD_ROUTINE

Failed to load routine <routineName>.

38000 # INVALID_UDF_IMPLEMENTATION

Function <funcName> does not implement a ScalarFunction or AggregateFunction.

38000 # NO_UDF_INTERFACE

UDF class <className> doesn't implement any UDF interface.

38000 # PYTHON_DATA_SOURCE_ERROR

Failed to <action> Python data source <type>: <msg>.

38000 # PYTHON_STREAMING_DATA_SOURCE_RUNTIME_ERROR

Failed when the Python streaming data source performs <action>: <msg>.

38000 # TABLE_VALUED_FUNCTION_FAILED_TO_ANALYZE_IN_PYTHON

Failed to analyze the Python user defined table function: <msg>.

39000 # FAILED_EXECUTE_UDF

User defined function (<functionName>: (<signature>) => <result>) failed due to: <reason>.

39000 # FLATMAPGROUPSWITHSTATE_USER_FUNCTION_ERROR

An error occurred in the user provided function in flatMapGroupsWithState. Reason: <reason>.

39000 # FOREACH_BATCH_USER_FUNCTION_ERROR

An error occurred in the user provided function in foreach batch sink. Reason: <reason>.

39000 # FOREACH_USER_FUNCTION_ERROR

An error occurred in the user provided function in foreach sink. Reason: <reason>.

3F000 # MISSING_DATABASE_FOR_V1_SESSION_CATALOG

Database name is not specified in the v1 session catalog. Please ensure to provide a valid database name when interacting with the v1 catalog.

40000 # CONCURRENT_STREAM_LOG_UPDATE

Concurrent update to the log. Multiple streaming jobs detected for <batchId>. Please make sure only one streaming job runs on a specific checkpoint location at a time.

42000 # AMBIGUOUS_REFERENCE_TO_FIELDS

Ambiguous reference to the field <field>. It appears <count> times in the schema.

42000 # CANNOT_REMOVE_RESERVED_PROPERTY

Cannot remove reserved property: <property>.

42000 # CLUSTERING_NOT_SUPPORTED

'<operation>' does not support clustering.

42000 # INVALID_COLUMN_OR_FIELD_DATA_TYPE

Column or field <name> is of type <type> while it's required to be <expectedType>.

42000 # INVALID_EXTRACT_BASE_FIELD_TYPE

Can't extract a value from <base>. Need a complex type [STRUCT, ARRAY, MAP] but got <other>.

42000 # INVALID_EXTRACT_FIELD_TYPE

Field name should be a non-null string literal, but it's <extraction>.

42000 # INVALID_FIELD_NAME

Field name <fieldName> is invalid: <path> is not a struct.

42000 # INVALID_INLINE_TABLE

Invalid inline table.

# CANNOT_EVALUATE_EXPRESSION_IN_INLINE_TABLE

Cannot evaluate the expression <expr> in inline table definition.

# FAILED_SQL_EXPRESSION_EVALUATION

Failed to evaluate the SQL expression <sqlExpr>. Please check your syntax and ensure all required tables and columns are available.

# INCOMPATIBLE_TYPES_IN_INLINE_TABLE

Found incompatible types in the column <colName> for inline table.

# NUM_COLUMNS_MISMATCH

Inline table expected <expectedNumCols> columns but found <actualNumCols> columns in row <rowIndex>.

42000 # INVALID_RESET_COMMAND_FORMAT

Expected format is 'RESET' or 'RESET key'. If you want to include special characters in key, please use quotes, e.g., RESET `key`.

42000 # INVALID_SAVE_MODE

The specified save mode <mode> is invalid. Valid save modes include "append", "overwrite", "ignore", "error", "errorifexists", and "default".

42000 # INVALID_SET_SYNTAX

Expected format is 'SET', 'SET key', or 'SET key=value'. If you want to include special characters in key, or include semicolon in value, please use backquotes, e.g., SET `key`=`value`.

42000 # INVALID_SQL_SYNTAX

Invalid SQL syntax:

# ANALYZE_TABLE_UNEXPECTED_NOSCAN

ANALYZE TABLE(S) ... COMPUTE STATISTICS ... <ctx> must be either NOSCAN or empty.

# CREATE_ROUTINE_WITH_IF_NOT_EXISTS_AND_REPLACE

Cannot create a routine with both IF NOT EXISTS and REPLACE specified.

# CREATE_TEMP_FUNC_WITH_DATABASE

Specifying a database (<database>) in CREATE TEMPORARY FUNCTION is not allowed.

# CREATE_TEMP_FUNC_WITH_IF_NOT_EXISTS

CREATE TEMPORARY FUNCTION with IF NOT EXISTS is not allowed.

# EMPTY_PARTITION_VALUE

Partition key <partKey> must specify a value.

# FUNCTION_WITH_UNSUPPORTED_SYNTAX

The function <prettyName> does not support <syntax>.

# INVALID_COLUMN_REFERENCE

Expected a column reference for transform <transform>: <expr>.

# INVALID_TABLE_FUNCTION_IDENTIFIER_ARGUMENT_MISSING_PARENTHESES

Syntax error: the call to the table-valued function is invalid because parentheses are missing around the provided TABLE argument <argumentName>; please surround this with parentheses and try again.

# INVALID_TABLE_VALUED_FUNC_NAME

A table-valued function cannot specify a database name: <funcName>.

# INVALID_WINDOW_REFERENCE

Window reference <windowName> is not a window specification.

# LATERAL_WITHOUT_SUBQUERY_OR_TABLE_VALUED_FUNC

LATERAL can only be used with subqueries and table-valued functions.

# MULTI_PART_NAME

<statement> with a multi-part name (<name>) is not allowed.

# OPTION_IS_INVALID

The option or property key <key> is invalid; only <supported> are supported.

# REPETITIVE_WINDOW_DEFINITION

The definition of the window <windowName> is repeated.

# SHOW_FUNCTIONS_INVALID_PATTERN

Invalid pattern in SHOW FUNCTIONS: <pattern>. It must be a "STRING" literal.

# SHOW_FUNCTIONS_INVALID_SCOPE

SHOW <scope> FUNCTIONS is not supported.

# TRANSFORM_WRONG_NUM_ARGS

The transform <transform> requires <expectedNum> parameters but the actual number is <actualNum>.

# UNRESOLVED_WINDOW_REFERENCE

Cannot resolve window reference <windowName>.

# UNSUPPORTED_FUNC_NAME

Unsupported function name <funcName>.

# UNSUPPORTED_SQL_STATEMENT

Unsupported SQL statement: <sqlText>.

# VARIABLE_TYPE_OR_DEFAULT_REQUIRED

The definition of a SQL variable requires either a datatype or a DEFAULT clause. For example, use DECLARE name STRING or DECLARE name = 'SQL' instead of DECLARE name.

42000 # INVALID_USAGE_OF_STAR_OR_REGEX

Invalid usage of <elem> in <prettyName>.

42000 # INVALID_WRITE_DISTRIBUTION

The requested write distribution is invalid.

# PARTITION_NUM_AND_SIZE

The partition number and advisory partition size can't be specified at the same time.

# PARTITION_NUM_WITH_UNSPECIFIED_DISTRIBUTION

The number of partitions can't be specified with unspecified distribution.

# PARTITION_SIZE_WITH_UNSPECIFIED_DISTRIBUTION

The advisory partition size can't be specified with unspecified distribution.

42000 # MULTIPLE_QUERY_RESULT_CLAUSES_WITH_PIPE_OPERATORS

<clause1> and <clause2> cannot coexist in the same SQL pipe operator using '|>'. Please separate the multiple result clauses into separate pipe operators and then retry the query.

42000 # NON_PARTITION_COLUMN

PARTITION clause cannot contain the non-partition column: <columnName>.

42000 # NOT_NULL_ASSERT_VIOLATION

NULL value appeared in non-nullable field: <walkedTypePath>. If the schema is inferred from a Scala tuple/case class, or a Java bean, please try to use scala.Option[_] or other nullable types (such as java.lang.Integer instead of int/scala.Int).

42000 # NOT_NULL_CONSTRAINT_VIOLATION

Assigning a NULL is not allowed here.

# ARRAY_ELEMENT

The array <columnPath> is defined to contain only elements that are NOT NULL.

# MAP_VALUE

The map <columnPath> is defined to contain only values that are NOT NULL.

42000 # NO_HANDLER_FOR_UDAF

No handler for UDAF '<functionName>'. Use sparkSession.udf.register(...) instead.

42000 # NULLABLE_COLUMN_OR_FIELD

Column or field <name> is nullable while it's required to be non-nullable.

42000 # NULLABLE_ROW_ID_ATTRIBUTES

Row ID attributes cannot be nullable: <nullableRowIdAttrs>.

42000 # PARTITION_COLUMN_NOT_FOUND_IN_SCHEMA

Partition column <column> not found in schema <schema>. Please provide the existing column for partitioning.

42001 # INVALID_AGNOSTIC_ENCODER

Found an invalid agnostic encoder. Expects an instance of AgnosticEncoder but got <encoderType>. For more information consult '<docroot>/api/java/index.html?org/apache/spark/sql/Encoder.html'.

42001 # INVALID_EXPRESSION_ENCODER

Found an invalid expression encoder. Expects an instance of ExpressionEncoder but got <encoderType>. For more information consult '<docroot>/api/java/index.html?org/apache/spark/sql/Encoder.html'.

42601 # COLUMN_ALIASES_NOT_ALLOWED

Column aliases are not allowed in <op>.

42601 # IDENTIFIER_TOO_MANY_NAME_PARTS

<identifier> is not a valid identifier as it has more than 2 name parts.

42601 # IDENTITY_COLUMNS_DUPLICATED_SEQUENCE_GENERATOR_OPTION

Duplicated IDENTITY column sequence generator option: <sequenceGeneratorOption>.

42601 # ILLEGAL_STATE_STORE_VALUE

Illegal value provided to the State Store:

# EMPTY_LIST_VALUE

Cannot write empty list values to State Store for StateName <stateName>.

# NULL_VALUE

Cannot write null values to State Store for StateName <stateName>.

42601 # INVALID_ATTRIBUTE_NAME_SYNTAX

Syntax error in the attribute name: <name>. Check that backticks appear in pairs, that a quoted string is a complete name part, and that backticks are used only inside quoted name parts.

42601 # INVALID_BUCKET_COLUMN_DATA_TYPE

Cannot use <type> for bucket column. Collated data types are not supported for bucketing.

42601 # INVALID_EXTRACT_FIELD

Cannot extract <field> from <expr>.

42601 # INVALID_FORMAT

The format is invalid: <format>.

# CONT_THOUSANDS_SEPS

Thousands separators (, or G) must have digits in between them in the number format.

# CUR_MUST_BEFORE_DEC

Currency characters must appear before any decimal point in the number format.

# CUR_MUST_BEFORE_DIGIT

Currency characters must appear before digits in the number format.

# EMPTY

The number format string cannot be empty.

# ESC_AT_THE_END

The format string cannot end with the escape character.

# ESC_IN_THE_MIDDLE

The escape character is not allowed to precede <char>.

# MISMATCH_INPUT

The input <inputType> <input> does not match the format.

# THOUSANDS_SEPS_MUST_BEFORE_DEC

Thousands separators (, or G) may not appear after the decimal point in the number format.

# UNEXPECTED_TOKEN

Found the unexpected <token> in the format string; the structure of the format string must match: [MI|S] [$] [0|9|G|,]* [.|D] [0|9]* [$] [PR|MI|S].

# WRONG_NUM_DIGIT

The format string requires at least one number digit.

# WRONG_NUM_TOKEN

At most one <token> is allowed in the number format.

42601 # INVALID_PARTITION_OPERATION

The partition command is invalid.

# PARTITION_MANAGEMENT_IS_UNSUPPORTED

Table <name> does not support partition management.

# PARTITION_SCHEMA_IS_EMPTY

Table <name> is not partitioned.

42601 # INVALID_STATEMENT_OR_CLAUSE

The statement or clause: <operation> is not valid.

42601 # INVALID_WINDOW_SPEC_FOR_AGGREGATION_FUNC

Cannot specify ORDER BY or a window frame for <aggFunc>.

42601 # LOCAL_MUST_WITH_SCHEMA_FILE

LOCAL must be used together with the 'file' scheme, but got: <actualSchema>.

42601 # MERGE_WITHOUT_WHEN

There must be at least one WHEN clause in a MERGE statement.
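
A schematic MERGE with the minimum required WHEN clauses (a sketch; target and source are hypothetical tables in a catalog whose table format supports MERGE):

    spark.sql("""
      MERGE INTO target AS t
      USING source AS s
      ON t.id = s.id
      WHEN MATCHED THEN UPDATE SET t.value = s.value
      WHEN NOT MATCHED THEN INSERT (id, value) VALUES (s.id, s.value)
    """)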

42601 # NOT_ALLOWED_IN_FROM

Not allowed in the FROM clause:

# LATERAL_WITH_PIVOT

LATERAL together with PIVOT.

# LATERAL_WITH_UNPIVOT

LATERAL together with UNPIVOT.

# UNPIVOT_WITH_PIVOT

UNPIVOT together with PIVOT.

42601 # NOT_ALLOWED_IN_PIPE_OPERATOR_WHERE

Not allowed in the pipe WHERE clause:

# WINDOW_CLAUSE

WINDOW clause.

42601 # NOT_A_CONSTANT_STRING

The expression <expr> used for the routine or clause <name> must be a constant STRING which is NOT NULL.

# NOT_CONSTANT

To be considered constant the expression must not depend on any columns, contain a subquery, or invoke a non-deterministic function such as rand().

# NULL

The expression evaluates to NULL.

# WRONG_TYPE

The data type of the expression is <dataType>.

42601 # NOT_UNRESOLVED_ENCODER

Unresolved encoder expected, but <attr> was found.

42601 # PARSE_MODE_UNSUPPORTED

The function <funcName> doesn't support the <mode> mode. Acceptable modes are PERMISSIVE and FAILFAST.

42601 # PARSE_SYNTAX_ERROR

Syntax error at or near <error><hint>.

42601 # REF_DEFAULT_VALUE_IS_NOT_ALLOWED_IN_PARTITION

References to DEFAULT column values are not allowed within the PARTITION clause.

42601 # SORT_BY_WITHOUT_BUCKETING

sortBy must be used together with bucketBy.

42601 # SPECIFY_BUCKETING_IS_NOT_ALLOWED

A CREATE TABLE without explicit column list cannot specify bucketing information. Please use the form with explicit column list and specify bucketing information. Alternatively, allow bucketing information to be inferred by omitting the clause.

42601 # SPECIFY_PARTITION_IS_NOT_ALLOWED

A CREATE TABLE without explicit column list cannot specify PARTITIONED BY. Please use the form with explicit column list and specify PARTITIONED BY. Alternatively, allow partitioning to be inferred by omitting the PARTITION BY clause.

42601 # STDS_REQUIRED_OPTION_UNSPECIFIED

'<optionName>' must be specified.

42601 # SYNTAX_DISCONTINUED

Support of the clause or keyword: <clause> has been discontinued in this context.

# BANG_EQUALS_NOT

The '!' keyword is only supported as an alias for the prefix operator 'NOT'. Use the 'NOT' keyword instead for infix clauses such as NOT LIKE, NOT IN, NOT BETWEEN, etc. To re-enable the '!' keyword, set "spark.sql.legacy.bangEqualsNot" to "true".

42601 # TRAILING_COMMA_IN_SELECT

Trailing comma detected in SELECT clause. Remove the trailing comma before the FROM clause.

42601 # UNCLOSED_BRACKETED_COMMENT

Found an unclosed bracketed comment. Please append */ at the end of the comment.

42601 # WINDOW_FUNCTION_WITHOUT_OVER_CLAUSE

Window function <funcName> requires an OVER clause.

42601 # WRITE_STREAM_NOT_ALLOWED

writeStream can be called only on streaming Dataset/DataFrame.

42602 # CIRCULAR_CLASS_REFERENCE

Cannot have circular references in class, but got the circular reference of class <t>.

42602 # DUPLICATED_CTE_NAMES

CTE definition can't have duplicate names: <duplicateNames>.

42602 # INVALID_DELIMITER_VALUE

Invalid value for delimiter.

# DELIMITER_LONGER_THAN_EXPECTED

Delimiter cannot be more than one character: <str>.

# EMPTY_STRING

Delimiter cannot be an empty string.

# NULL_VALUE

Delimiter cannot be null.

# SINGLE_BACKSLASH

A single backslash is prohibited. It has a special meaning as the beginning of an escape sequence. To get the backslash character, pass a string with two backslashes as the delimiter.

# UNSUPPORTED_SPECIAL_CHARACTER

Unsupported special character for delimiter: <str>.

42602 # INVALID_IDENTIFIER

The unquoted identifier <ident> is invalid and must be backquoted as: `<ident>`. Unquoted identifiers can only contain ASCII letters ('a' - 'z', 'A' - 'Z'), digits ('0' - '9'), and underscore ('_'). Unquoted identifiers must also not start with a digit. Different data sources and meta stores may impose additional restrictions on valid identifiers.

42602 # INVALID_PROPERTY_KEY

<key> is an invalid property key, please use quotes, e.g. SET <key>=<value>.

42602 # INVALID_PROPERTY_VALUE

<value> is an invalid property value, please use quotes, e.g. SET <key>=<value>.

42602 # INVALID_SCHEMA_OR_RELATION_NAME

<name> is not a valid name for tables/schemas. Valid names only contain alphabetic characters, numbers and _.

42604 # AS_OF_JOIN

Invalid as-of join.

# TOLERANCE_IS_NON_NEGATIVE

The input argument tolerance must be non-negative.

# TOLERANCE_IS_UNFOLDABLE

The input argument tolerance must be a constant.

# UNSUPPORTED_DIRECTION

Unsupported as-of join direction '<direction>'. Supported as-of join directions include: <supported>.

42604 # EMPTY_JSON_FIELD_VALUE

Failed to parse an empty string for data type <dataType>.

42604 # INVALID_ESC

Found an invalid escape string: <invalidEscape>. The escape string must contain only one character.

42604 # INVALID_ESCAPE_CHAR

EscapeChar should be a string literal of length one, but got <sqlExpr>.

42604 # INVALID_TYPED_LITERAL

The value of the typed literal <valueType> is invalid: <value>.

42605 # WRONG_NUM_ARGS

The <functionName> requires <expectedNum> parameters but the actual number is <actualNum>.

# WITHOUT_SUGGESTION

Please refer to '<docroot>/sql-ref-functions.html' for a fix.

# WITH_SUGGESTION

If you have to call this function with <legacyNum> parameters, set the legacy configuration <legacyConfKey> to <legacyConfValue>.

42607 # NESTED_AGGREGATE_FUNCTION

It is not allowed to use an aggregate function in the argument of another aggregate function. Please use the inner aggregate function in a sub-query.

42608 # DEFAULT_PLACEMENT_INVALID

A DEFAULT keyword in a MERGE, INSERT, UPDATE, or SET VARIABLE command could not be directly assigned to a target column because it was part of an expression. For example: UPDATE SET c1 = DEFAULT is allowed, but UPDATE T SET c1 = DEFAULT + 1 is not allowed.

42608 # NO_DEFAULT_COLUMN_VALUE_AVAILABLE

Can't determine the default value for <colName> since it is not nullable and it has no default value.

42611 # CANNOT_ASSIGN_EVENT_TIME_COLUMN_WITHOUT_WATERMARK

Watermark needs to be defined to reassign event time column. Failed to find watermark definition in the streaming query.

42611 # IDENTITY_COLUMNS_ILLEGAL_STEP

IDENTITY column step cannot be 0.

42613 # INCOMPATIBLE_JOIN_TYPES

The join types <joinType1> and <joinType2> are incompatible.

42613 # INVALID_JOIN_TYPE_FOR_JOINWITH

Invalid join type in joinWith: <joinType>.

42613 # INVALID_LATERAL_JOIN_TYPE

The <joinType> JOIN with LATERAL correlation is not allowed because an OUTER subquery cannot correlate to its join partner. Remove the LATERAL correlation or use an INNER JOIN, or LEFT OUTER JOIN instead.

42613 # INVALID_QUERY_MIXED_QUERY_PARAMETERS

A parameterized query must use either positional or named parameters, but not both.

42613 # INVALID_SINGLE_VARIANT_COLUMN

The singleVariantColumn option cannot be used if there is also a user-specified schema.

42613 # NON_LAST_MATCHED_CLAUSE_OMIT_CONDITION

When there is more than one MATCHED clause in a MERGE statement, only the last MATCHED clause can omit the condition.

42613 # NON_LAST_NOT_MATCHED_BY_SOURCE_CLAUSE_OMIT_CONDITION

When there is more than one NOT MATCHED BY SOURCE clause in a MERGE statement, only the last NOT MATCHED BY SOURCE clause can omit the condition.

42613 # NON_LAST_NOT_MATCHED_BY_TARGET_CLAUSE_OMIT_CONDITION

When there is more than one NOT MATCHED [BY TARGET] clause in a MERGE statement, only the last NOT MATCHED [BY TARGET] clause can omit the condition.

42613 # STDS_CONFLICT_OPTIONS

The options <options> cannot be specified together. Please specify only one.

42614 # DUPLICATE_CLAUSES

Found duplicate clauses: <clauseName>. Please remove one of them.

42614 # REPEATED_CLAUSE

The <clause> clause may be used at most once per <operation> operation.

42616 # STDS_INVALID_OPTION_VALUE

Invalid value for source option '<optionName>':

# IS_EMPTY

cannot be empty.

# IS_NEGATIVE

cannot be negative.

# WITH_MESSAGE

<message>

42617 # PARSE_EMPTY_STATEMENT

Syntax error, unexpected empty statement.

42621 # UNSUPPORTED_EXPRESSION_GENERATED_COLUMN

Cannot create generated column <fieldName> with generation expression <expressionStr> because <reason>.

42623 # ADD_DEFAULT_UNSUPPORTED

Failed to execute <statementType> command because DEFAULT values are not supported when adding new columns to a previously existing target data source with table provider: "<dataSource>".

42623 # DEFAULT_UNSUPPORTED

Failed to execute <statementType> command because DEFAULT values are not supported for target data source with table provider: "<dataSource>".

42623 # GENERATED_COLUMN_WITH_DEFAULT_VALUE

A column cannot have both a default value and a generation expression but column <colName> has default value: (<defaultValue>) and generation expression: (<genExpr>).

42623 # IDENTITY_COLUMN_WITH_DEFAULT_VALUE

A column cannot have both a default value and an identity column specification but column <colName> has default value: (<defaultValue>) and identity column specification: (<identityColumnSpec>).

42623 # INVALID_DEFAULT_VALUE

Failed to execute <statement> command because the destination column or variable <colName> has a DEFAULT value <defaultValue>,

# DATA_TYPE

which requires <expectedType> type, but the statement provided a value of incompatible <actualType> type.

# NOT_CONSTANT

which is not a constant expression whose equivalent value is known at query planning time.

# SUBQUERY_EXPRESSION

which contains subquery expressions.

# UNRESOLVED_EXPRESSION

which fails to resolve as a valid expression.

42701 # DUPLICATE_ASSIGNMENTS

The columns or variables <nameList> appear more than once as assignment targets.

42701 # EXEC_IMMEDIATE_DUPLICATE_ARGUMENT_ALIASES

The USING clause of this EXECUTE IMMEDIATE command contained multiple arguments with the same alias (<aliases>), which is invalid; please update the command to specify unique aliases and then try it again.
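
A sketch of the fix (the aliases x and y and the values are hypothetical):

    # Raises EXEC_IMMEDIATE_DUPLICATE_ARGUMENT_ALIASES: the alias x appears twice
    spark.sql("EXECUTE IMMEDIATE 'SELECT :x + :y' USING 5 AS x, 6 AS x")

    # Fixed: every USING argument has a unique alias
    spark.sql("EXECUTE IMMEDIATE 'SELECT :x + :y' USING 5 AS x, 6 AS y")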

42702 # AMBIGUOUS_COLUMN_OR_FIELD

Column or field <name> is ambiguous and has <n> matches.

42702 # AMBIGUOUS_COLUMN_REFERENCE

Column <name> is ambiguous. It's because you joined several DataFrames together, and some of these DataFrames are the same. This column points to one of the DataFrames but Spark is unable to figure out which one. Please alias the DataFrames with different names via DataFrame.alias before joining them, and specify the column using a qualified name, e.g. df.alias("a").join(df.alias("b"), col("a.id") > col("b.id")).

42702 # AMBIGUOUS_LATERAL_COLUMN_ALIAS

Lateral column alias <name> is ambiguous and has <n> matches.

42702 # EXCEPT_OVERLAPPING_COLUMNS

Columns in an EXCEPT list must be distinct and non-overlapping, but got (<columns>).

42703 # COLUMN_NOT_DEFINED_IN_TABLE

<colType> column <colName> is not defined in table <tableName>, defined table columns are: <tableCols>.

42703 # COLUMN_NOT_FOUND

The column <colName> cannot be found. Verify the spelling and correctness of the column name according to the SQL config <caseSensitiveConfig>.

42703 # UNRESOLVED_COLUMN

A column, variable, or function parameter with name <objectName> cannot be resolved.

# WITHOUT_SUGGESTION
# WITH_SUGGESTION

Did you mean one of the following? [<proposal>].

42703 # UNRESOLVED_FIELD

A field with name <fieldName> cannot be resolved with the struct-type column <columnPath>.

# WITHOUT_SUGGESTION
# WITH_SUGGESTION

Did you mean one of the following? [<proposal>].

42703 # UNRESOLVED_MAP_KEY

Cannot resolve column <objectName> as a map key. If the key is a string literal, add single quotes '' around it.

# WITHOUT_SUGGESTION
# WITH_SUGGESTION

Otherwise did you mean one of the following column(s)? [<proposal>].
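
A small sketch of the quoting fix (the schema and key name are hypothetical):

    df = spark.createDataFrame([({"a": 1},)], "m map<string,int>")
    df.selectExpr("m[a]")    # a is parsed as a column reference and fails to resolve
    df.selectExpr("m['a']")  # quoted, 'a' is a string literal map key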

42703 # UNRESOLVED_USING_COLUMN_FOR_JOIN

USING column <colName> cannot be resolved on the <side> side of the join. The <side>-side columns: [<suggestion>].

42704 # AMBIGUOUS_REFERENCE

Reference <name> is ambiguous, could be: <referenceNames>.

42704 # CANNOT_RESOLVE_DATAFRAME_COLUMN

Cannot resolve dataframe column <name>. It's probably because of illegal references like df1.select(df2.col("a")).

42704 # CANNOT_RESOLVE_STAR_EXPAND

Cannot resolve <targetString>.* given input columns <columns>. Please check that the specified table or struct exists and is accessible in the input columns.

42704 # CODEC_SHORT_NAME_NOT_FOUND

Cannot find a short name for the codec <codecName>.

42704 # COLLATION_INVALID_NAME

The value <collationName> does not represent a correct collation name. Suggested valid collation names: [<proposals>].

42704 # COLLATION_INVALID_PROVIDER

The value <provider> does not represent a correct collation provider. Supported providers are: [<supportedProviders>].

42704 # DATA_SOURCE_NOT_EXIST

Data source '<provider>' not found. Please make sure the data source is registered.

42704 # DEFAULT_DATABASE_NOT_EXISTS

Default database <defaultDatabase> does not exist. Please create it first, or change the default database to <defaultDatabase>.

42704 # ENCODER_NOT_FOUND

Could not find an encoder of the type <typeName> for the Spark SQL internal representation. Consider changing the input type to one of the supported types at '<docroot>/sql-ref-datatypes.html'.

42704 # FIELD_NOT_FOUND

No such struct field <fieldName> in <fields>.

42704 # INDEX_NOT_FOUND

Cannot find the index <indexName> on table <tableName>.

42704 # SCHEMA_NOT_FOUND

The schema <schemaName> cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS.

42704 # UNRECOGNIZED_SQL_TYPE

Unrecognized SQL type - name: <typeName>, id: <jdbcType>.

42704 # UNRECOGNIZED_STATISTIC

The statistic <stats> is not recognized. Valid statistics include count, count_distinct, approx_count_distinct, mean, stddev, min, max, and percentile values. Percentile must be a numeric value followed by '%', within the range 0% to 100%.

42710 # ALTER_TABLE_COLUMN_DESCRIPTOR_DUPLICATE

ALTER TABLE <type> column <columnName> specifies descriptor "<optionName>" more than once, which is invalid.

42710 # CREATE_TABLE_COLUMN_DESCRIPTOR_DUPLICATE

CREATE TABLE column <columnName> specifies descriptor "<optionName>" more than once, which is invalid.

42710 # DATA_SOURCE_ALREADY_EXISTS

Data source '<provider>' already exists. Please choose a different name for the new data source.

42710 # DUPLICATED_METRICS_NAME

The metric name is not unique: <metricName>. The same name cannot be used for metrics with different results. However, multiple instances of metrics with the same result and name are allowed (e.g. self-joins).

42710 # FIELD_ALREADY_EXISTS

Cannot <op> column, because <fieldNames> already exists in <struct>.

42710 # FOUND_MULTIPLE_DATA_SOURCES

Detected multiple data sources with the name '<provider>'. Please check that the data source isn't simultaneously registered and located in the classpath.

42710 # INDEX_ALREADY_EXISTS

Cannot create the index <indexName> on table <tableName> because it already exists.

42710 # LOCATION_ALREADY_EXISTS

Cannot name the managed table as <identifier>, as its associated location <location> already exists. Please pick a different table name, or remove the existing location first.

42710 # MULTIPLE_XML_DATA_SOURCE

Detected multiple data sources with the name <provider> (<sourceNames>). Please specify the fully qualified class name or remove <externalSource> from the classpath.

42711 # COLUMN_ALREADY_EXISTS

The column <columnName> already exists. Choose another name or rename the existing column.

42713 # DUPLICATED_FIELD_NAME_IN_ARROW_STRUCT

Duplicated field names in Arrow Struct are not allowed, got <fieldNames>.

42713 # STATIC_PARTITION_COLUMN_IN_INSERT_COLUMN_LIST

Static partition column <staticName> is also specified in the column list.

42723 # ROUTINE_ALREADY_EXISTS

Cannot create the <newRoutineType> <routineName> because a <existingRoutineType> of that name already exists. Choose a different name, drop or replace the existing <existingRoutineType>, or add the IF NOT EXISTS clause to tolerate a pre-existing <newRoutineType>.

42723 # VARIABLE_ALREADY_EXISTS

Cannot create the variable <variableName> because it already exists. Choose a different name, or drop or replace the existing variable.

4274K # DUPLICATE_ROUTINE_PARAMETER_ASSIGNMENT

Call to routine <routineName> is invalid because it includes multiple argument assignments to the same parameter name <parameterName>.

# BOTH_POSITIONAL_AND_NAMED

A positional argument and named argument both referred to the same parameter. Please remove the named argument referring to this parameter.

# DOUBLE_NAMED_ARGUMENT_REFERENCE

More than one named argument referred to the same parameter. Please assign a value only once.

4274K # NAMED_PARAMETERS_NOT_SUPPORTED

Named parameters are not supported for function <functionName>; please retry the query with positional arguments to the function call instead.

4274K # REQUIRED_PARAMETER_NOT_FOUND

Cannot invoke routine <routineName> because the parameter named <parameterName> is required, but the routine call did not supply a value. Please update the routine call to supply an argument value (either positionally at index <index> or by name) and retry the query.

4274K # UNEXPECTED_POSITIONAL_ARGUMENT

Cannot invoke routine <routineName> because it contains positional argument(s) following the named argument assigned to <parameterName>; please rearrange them so the positional arguments come first and then retry the query.

4274K # UNRECOGNIZED_PARAMETER_NAME

Cannot invoke routine <routineName> because the routine call included a named argument reference for the argument named <argumentName>, but this routine does not include any signature containing an argument with this name. Did you mean one of the following? [<proposal>].

42802 # ASSIGNMENT_ARITY_MISMATCH

The number of columns or variables assigned or aliased: <numTarget> does not match the number of source expressions: <numExpr>.

42802 # STATEFUL_PROCESSOR_CANNOT_PERFORM_OPERATION_WITH_INVALID_HANDLE_STATE

Failed to perform stateful processor operation=<operationType> with invalid handle state=<handleState>.

42802 # STATEFUL_PROCESSOR_CANNOT_PERFORM_OPERATION_WITH_INVALID_TIME_MODE

Failed to perform stateful processor operation=<operationType> with invalid timeMode=<timeMode>.

42802 # STATEFUL_PROCESSOR_DUPLICATE_STATE_VARIABLE_DEFINED

State variable with name <stateVarName> has already been defined in the StatefulProcessor.

42802 # STATEFUL_PROCESSOR_INCORRECT_TIME_MODE_TO_ASSIGN_TTL

Cannot use TTL for state=<stateName> in timeMode=<timeMode>, use TimeMode.ProcessingTime() instead.

42802 # STATEFUL_PROCESSOR_TTL_DURATION_MUST_BE_POSITIVE

TTL duration must be greater than zero for State store operation=<operationType> on state=<stateName>.

42802 # STATEFUL_PROCESSOR_UNKNOWN_TIME_MODE

Unknown time mode <timeMode>. Accepted time modes are 'none', 'processingTime', and 'eventTime'.

42802 # STATE_STORE_CANNOT_CREATE_COLUMN_FAMILY_WITH_RESERVED_CHARS

Failed to create column family with an unsupported starting character in name=<colFamilyName>.

42802 # STATE_STORE_CANNOT_USE_COLUMN_FAMILY_WITH_INVALID_NAME

Failed to perform column family operation=<operationName> with invalid name=<colFamilyName>. The column family name cannot be empty, include leading/trailing spaces, or use the reserved keyword=default.

42802 # STATE_STORE_COLUMN_FAMILY_SCHEMA_INCOMPATIBLE

Incompatible schema transformation with column family=<colFamilyName>, oldSchema=<oldSchema>, newSchema=<newSchema>.

42802 # STATE_STORE_HANDLE_NOT_INITIALIZED

The handle has not been initialized for this StatefulProcessor. Please only use the StatefulProcessor within the transformWithState operator.

42802 # STATE_STORE_INCORRECT_NUM_ORDERING_COLS_FOR_RANGE_SCAN

Incorrect number of ordering ordinals=<numOrderingCols> for range scan encoder. The number of ordering ordinals cannot be zero or greater than the number of schema columns.

42802 # STATE_STORE_INCORRECT_NUM_PREFIX_COLS_FOR_PREFIX_SCAN

Incorrect number of prefix columns=<numPrefixCols> for prefix scan encoder. The number of prefix columns cannot be zero, or greater than or equal to the number of schema columns.

42802 # STATE_STORE_NULL_TYPE_ORDERING_COLS_NOT_SUPPORTED

Null type ordering column with name=<fieldName> at index=<index> is not supported for range scan encoder.

42802 # STATE_STORE_UNSUPPORTED_OPERATION_ON_MISSING_COLUMN_FAMILY

State store operation=<operationType> not supported on missing column family=<colFamilyName>.

42802 # STATE_STORE_VARIABLE_SIZE_ORDERING_COLS_NOT_SUPPORTED

Variable size ordering column with name=<fieldName> at index=<index> is not supported for range scan encoder.

42802 # UDTF_ALIAS_NUMBER_MISMATCH

The number of aliases supplied in the AS clause does not match the number of columns output by the UDTF. Expected <aliasesSize> aliases, but got <aliasesNames>. Please ensure that the number of aliases provided matches the number of columns output by the UDTF.

42802 # UDTF_INVALID_ALIAS_IN_REQUESTED_ORDERING_STRING_FROM_ANALYZE_METHOD

Failed to evaluate the user-defined table function because its 'analyze' method returned a requested OrderingColumn whose column name expression included an unnecessary alias <aliasName>; please remove this alias and then try the query again.

42802 # UDTF_INVALID_REQUESTED_SELECTED_EXPRESSION_FROM_ANALYZE_METHOD_REQUIRES_ALIAS

Failed to evaluate the user-defined table function because its 'analyze' method returned a requested 'select' expression (<expression>) that does not include a corresponding alias; please update the UDTF to specify an alias there and then try the query again.

42803 # GROUPING_COLUMN_MISMATCH

The grouping column (<grouping>) cannot be found in the grouping columns <groupingColumns>.

42803 # GROUPING_ID_COLUMN_MISMATCH

The columns of grouping_id (<groupingIdColumn>) do not match the grouping columns (<groupByColumns>).

42803 # MISSING_AGGREGATION

The non-aggregating expression <expression> is based on columns which are not participating in the GROUP BY clause. Add the columns or the expression to the GROUP BY, aggregate the expression, or use <expressionAnyValue> if you do not care which of the values within a group is returned.
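
A sketch of the three suggested fixes (the emp table and its columns are hypothetical):

    # Raises MISSING_AGGREGATION: name is neither grouped nor aggregated
    spark.sql("SELECT dept, name FROM emp GROUP BY dept")

    spark.sql("SELECT dept, name FROM emp GROUP BY dept, name")       # add it to GROUP BY
    spark.sql("SELECT dept, count(name) FROM emp GROUP BY dept")      # aggregate it
    spark.sql("SELECT dept, any_value(name) FROM emp GROUP BY dept")  # take an arbitrary value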

42803 # MISSING_GROUP_BY

The query does not include a GROUP BY clause. Add a GROUP BY clause, or turn the aggregates into window functions using OVER clauses.

42803 # UNRESOLVED_ALL_IN_GROUP_BY

Cannot infer grouping columns for GROUP BY ALL based on the select clause. Please explicitly specify the grouping columns.

42804 # INVALID_CORRUPT_RECORD_TYPE

The column <columnName> for corrupt records must have the nullable STRING type, but got <actualType>.

42804 # TRANSPOSE_INVALID_INDEX_COLUMN

Invalid index column for TRANSPOSE because: <reason>

42805 # GROUP_BY_POS_OUT_OF_RANGE

GROUP BY position <index> is not in select list (valid range is [1, <size>]).

42805 # ORDER_BY_POS_OUT_OF_RANGE

ORDER BY position <index> is not in select list (valid range is [1, <size>]).

42809 # EXPECT_PERMANENT_VIEW_NOT_TEMP

'<operation>' expects a permanent view but <viewName> is a temp view.

42809 # EXPECT_TABLE_NOT_VIEW

'<operation>' expects a table but <viewName> is a view.

# NO_ALTERNATIVE
# USE_ALTER_VIEW

Please use ALTER VIEW instead.

42809 # EXPECT_VIEW_NOT_TABLE

The table <tableName> does not support <operation>.

# NO_ALTERNATIVE
# USE_ALTER_TABLE

Please use ALTER TABLE instead.

42809 # FORBIDDEN_OPERATION

The operation <statement> is not allowed on the <objectType>: <objectName>.

42809 # NOT_A_PARTITIONED_TABLE

Operation <operation> is not allowed for <tableIdentWithDB> because it is not a partitioned table.

42809 # UNSUPPORTED_INSERT

Can't insert into the target.

# MULTI_PATH

Can only write data to relations with a single path, but the given paths are <paths>.

# NOT_ALLOWED

The target relation <relationId> does not allow insertion.

# NOT_PARTITIONED

The target relation <relationId> is not partitioned.

# RDD_BASED

An RDD-based table is not allowed.

# READ_FROM

The target relation <relationId> is also being read from.

42809 # WRONG_COMMAND_FOR_OBJECT_TYPE

The operation <operation> requires a <requiredType>. But <objectName> is a <foundType>. Use <alternative> instead.

42815 # EMITTING_ROWS_OLDER_THAN_WATERMARK_NOT_ALLOWED

Previous node emitted a row with eventTime=<emittedRowEventTime>, which is older than current_watermark_value=<currentWatermark>. This can lead to correctness issues in the stateful operators downstream in the execution pipeline. Please correct the operator logic to emit rows after the current global watermark value.

42818 # INCOMPARABLE_PIVOT_COLUMN

Invalid pivot column <columnName>. Pivot columns must be comparable.

42822 # EXPRESSION_TYPE_IS_NOT_ORDERABLE

Column expression <expr> cannot be sorted because its type <exprType> is not orderable.

42822 # GROUP_EXPRESSION_TYPE_IS_NOT_ORDERABLE

The expression <sqlExpr> cannot be used as a grouping expression because its data type <dataType> is not an orderable data type.

42823 # INVALID_SUBQUERY_EXPRESSION

Invalid subquery:

# SCALAR_SUBQUERY_RETURN_MORE_THAN_ONE_OUTPUT_COLUMN

Scalar subquery must return only one column, but got <number>.

42825 # CANNOT_MERGE_INCOMPATIBLE_DATA_TYPE

Failed to merge incompatible data types <left> and <right>. Please check the data types of the columns being merged and ensure that they are compatible. If necessary, consider casting the columns to compatible data types before attempting the merge.

42825 # INCOMPATIBLE_COLUMN_TYPE

<operator> can only be performed on tables with compatible column types. The <columnOrdinalNumber> column of the <tableOrdinalNumber> table is <dataType1> type, which is not compatible with <dataType2> at the same column of the first table.<hint>.

42826 # NUM_COLUMNS_MISMATCH

<operator> can only be performed on inputs with the same number of columns, but the first input has <firstNumColumns> columns and the <invalidOrdinalNum> input has <invalidNumColumns> columns.

42826 # NUM_TABLE_VALUE_ALIASES_MISMATCH

Number of given aliases does not match number of output columns. Function name: <funcName>; number of aliases: <aliasesNum>; number of output columns: <outColsNum>.

42845 # AGGREGATE_FUNCTION_WITH_NONDETERMINISTIC_EXPRESSION

Non-deterministic expression <sqlExpr> should not appear in the arguments of an aggregate function.

42846 # CANNOT_CAST_DATATYPE

Cannot cast <sourceType> to <targetType>.

42846 # CANNOT_CONVERT_PROTOBUF_FIELD_TYPE_TO_SQL_TYPE

Cannot convert Protobuf <protobufColumn> to SQL <sqlColumn> because schema is incompatible (protobufType = <protobufType>, sqlType = <sqlType>).

42846 # CANNOT_CONVERT_PROTOBUF_MESSAGE_TYPE_TO_SQL_TYPE

Unable to convert <protobufType> of Protobuf to SQL type <toType>.

42846 # CANNOT_CONVERT_SQL_TYPE_TO_PROTOBUF_FIELD_TYPE

Cannot convert SQL <sqlColumn> to Protobuf <protobufColumn> because schema is incompatible (protobufType = <protobufType>, sqlType = <sqlType>).

42846 # CANNOT_CONVERT_SQL_VALUE_TO_PROTOBUF_ENUM_TYPE

Cannot convert SQL <sqlColumn> to Protobuf <protobufColumn> because <data> is not in defined values for enum: <enumString>.

42846 # CANNOT_UP_CAST_DATATYPE

Cannot up cast <expression> from <sourceType> to <targetType>. <details>

42846 # EXPRESSION_DECODING_FAILED

Failed to decode a row to a value of the expressions: <expressions>.

42846 # EXPRESSION_ENCODING_FAILED

Failed to encode a value of the expressions: <expressions> to a row.

42846 # INVALID_PARTITION_VALUE

Failed to cast value <value> to data type <dataType> for partition column <columnName>. Ensure the value matches the expected data type for this partition column.

42846 # PARQUET_CONVERSION_FAILURE

Unable to create a Parquet converter for the data type <dataType> whose Parquet type is <parquetType>.

# DECIMAL

Parquet DECIMAL type can only be backed by INT32, INT64, FIXED_LEN_BYTE_ARRAY, or BINARY.

# UNSUPPORTED

Please modify the conversion making sure it is supported.

# WITHOUT_DECIMAL_METADATA

Please read this column/field as Spark BINARY type.

42846 # PARQUET_TYPE_ILLEGAL

Illegal Parquet type: <parquetType>.

42846 # PARQUET_TYPE_NOT_RECOGNIZED

Unrecognized Parquet type: <field>.

42846 # PARQUET_TYPE_NOT_SUPPORTED

Parquet type not yet supported: <parquetType>.

42846 # UNEXPECTED_SERIALIZER_FOR_CLASS

The class <className> has an unexpected expression serializer. Expects "STRUCT" or "IF" which returns "STRUCT" but found <expr>.

42883 # ROUTINE_NOT_FOUND

The routine <routineName> cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a schema and catalog, verify the current_schema() output, or qualify the name with the correct schema and catalog. To tolerate the error on drop use DROP ... IF EXISTS.

42883 # UNRESOLVABLE_TABLE_VALUED_FUNCTION

Could not resolve <name> to a table-valued function. Please make sure that <name> is defined as a table-valued function and that all required parameters are provided correctly. If <name> is not defined, please create the table-valued function before using it. For more information about defining table-valued functions, please refer to the Apache Spark documentation.

42883 # UNRESOLVED_ROUTINE

Cannot resolve routine <routineName> on search path <searchPath>.

42883 # UNRESOLVED_VARIABLE

Cannot resolve variable <variableName> on search path <searchPath>.

42883 # VARIABLE_NOT_FOUND

The variable <variableName> cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a schema and catalog, verify the current_schema() output, or qualify the name with the correct schema and catalog. To tolerate the error on drop use DROP VARIABLE IF EXISTS.

428C4 # UNPIVOT_VALUE_SIZE_MISMATCH

All unpivot value columns must have the same size as there are value column names (<names>).

428EK # TEMP_VIEW_NAME_TOO_MANY_NAME_PARTS

CREATE TEMPORARY VIEW or the corresponding Dataset APIs only accept single-part view names, but got: <actualName>.

428FR # CANNOT_ALTER_COLLATION_BUCKET_COLUMN

ALTER TABLE (ALTER|CHANGE) COLUMN cannot change collation of type/subtypes of bucket columns, but found the bucket column <columnName> in the table <tableName>.

428FR # CANNOT_ALTER_PARTITION_COLUMN

ALTER TABLE (ALTER|CHANGE) COLUMN is not supported for partition columns, but found the partition column <columnName> in the table <tableName>.

428FT # PARTITIONS_ALREADY_EXIST

Cannot ADD or RENAME TO partition(s) <partitionList> in table <tableName> because they already exist. Choose a different name, drop the existing partition, or add the IF NOT EXISTS clause to tolerate a pre-existing partition.

428FT # PARTITIONS_NOT_FOUND

The partition(s) <partitionList> cannot be found in table <tableName>. Verify the partition specification and table name. To tolerate the error on drop use ALTER TABLE … DROP IF EXISTS PARTITION.

428H2 # EXCEPT_NESTED_COLUMN_INVALID_TYPE

EXCEPT column <columnName> was resolved and expected to be StructType, but found type <dataType>.

428H2 # IDENTITY_COLUMNS_UNSUPPORTED_DATA_TYPE

DataType <dataType> is not supported for IDENTITY columns.

42902 # UNSUPPORTED_OVERWRITE

Can't overwrite the target that is also being read from.

# PATH

The target path is <path>.

# TABLE

The target table is <table>.

42903 # GROUP_BY_AGGREGATE

Aggregate functions are not allowed in GROUP BY, but found <sqlExpr>.

42903 # GROUP_BY_POS_AGGREGATE

GROUP BY <index> refers to an expression <aggExpr> that contains an aggregate function. Aggregate functions are not allowed in GROUP BY.

42903 # INVALID_AGGREGATE_FILTER

The FILTER expression <filterExpr> in an aggregate function is invalid.

# CONTAINS_AGGREGATE

Expected a FILTER expression without an aggregation, but found <aggExpr>.

# CONTAINS_WINDOW_FUNCTION

Expected a FILTER expression without a window function, but found <windowExpr>.

# NON_DETERMINISTIC

Expected a deterministic FILTER expression.

# NOT_BOOLEAN

Expected a FILTER expression of the BOOLEAN type.

42903 # INVALID_WHERE_CONDITION

The WHERE condition <condition> contains invalid expressions: <expressionList>. Rewrite the query to avoid window functions, aggregate functions, and generator functions in the WHERE clause.

42908 # SPECIFY_CLUSTER_BY_WITH_BUCKETING_IS_NOT_ALLOWED

Cannot specify both CLUSTER BY and CLUSTERED BY INTO BUCKETS.

42908 # SPECIFY_CLUSTER_BY_WITH_PARTITIONED_BY_IS_NOT_ALLOWED

Cannot specify both CLUSTER BY and PARTITIONED BY.

429BB # CANNOT_RECOGNIZE_HIVE_TYPE

Cannot recognize Hive type string: <fieldType>, column: <fieldName>. The specified data type for the field cannot be recognized by Spark SQL. Please check the data type of the specified field and ensure that it is a valid Spark SQL data type. Refer to the Spark SQL documentation for a list of valid data types and their format. If the data type is correct, please ensure that you are using a supported version of Spark SQL.

42K01 # DATATYPE_MISSING_SIZE

DataType <type> requires a length parameter, for example <type>(10). Please specify the length.

42K01 # INCOMPLETE_TYPE_DEFINITION

Incomplete complex type:

# ARRAY

The definition of "ARRAY" type is incomplete. You must provide an element type. For example: "ARRAY<elementType>".

# MAP

The definition of "MAP" type is incomplete. You must provide a key type and a value type. For example: "MAP<TIMESTAMP, INT>".

# STRUCT

The definition of "STRUCT" type is incomplete. You must provide at least one field type. For example: "STRUCT<name STRING, phone DECIMAL(10, 0)>".

42K02 # DATA_SOURCE_NOT_FOUND

Failed to find the data source: <provider>. Make sure the provider name is correct and the package is properly registered and compatible with your Spark version.

42K03 # BATCH_METADATA_NOT_FOUND

Unable to find batch <batchMetadataFile>.

42K03 # CANNOT_LOAD_PROTOBUF_CLASS

Could not load Protobuf class with name <protobufClassName>. <explanation>.

42K03 # DATA_SOURCE_TABLE_SCHEMA_MISMATCH

The schema of the data source table does not match the expected schema. If you are using the DataFrameReader.schema API or creating a table, avoid specifying the schema. Data source schema: <dsSchema>. Expected schema: <expectedSchema>.

42K03 # LOAD_DATA_PATH_NOT_EXISTS

LOAD DATA input path does not exist: <path>.

42K03 # PATH_NOT_FOUND

Path does not exist: <path>.

42K03 # RENAME_SRC_PATH_NOT_FOUND

Failed to rename as <sourcePath> was not found.

42K03 # STDS_FAILED_TO_READ_OPERATOR_METADATA

Failed to read the operator metadata for checkpointLocation=<checkpointLocation> and batchId=<batchId>. Either the file does not exist, or the file is corrupted. Rerun the streaming query to construct the operator metadata, and report to the corresponding communities or vendors if the error persists.

42K03 # STDS_FAILED_TO_READ_STATE_SCHEMA

Failed to read the state schema. Either the file does not exist, or the file is corrupted. options: <sourceOptions>. Rerun the streaming query to construct the state schema, and report to the corresponding communities or vendors if the error persists.

42K03 # STREAMING_STATEFUL_OPERATOR_NOT_MATCH_IN_STATE_METADATA

Streaming stateful operator name does not match the operator in the state metadata. This is likely to happen when a user adds, removes, or changes the stateful operators of an existing streaming query. Stateful operators in the metadata: [<OpsInMetadataSeq>]; stateful operators in the current batch: [<OpsInCurBatchSeq>].

42K04 # FAILED_RENAME_PATH

Failed to rename <sourcePath> to <targetPath> as destination already exists.

42K04 # PATH_ALREADY_EXISTS

Path <outputPath> already exists. Set mode as "overwrite" to overwrite the existing path.
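
A sketch of the suggested fix (the output path is hypothetical):

    df.write.parquet("/tmp/out")                    # raises PATH_ALREADY_EXISTS on a second run
    df.write.mode("overwrite").parquet("/tmp/out")  # explicitly overwrite the existing path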

42K05 # INVALID_EMPTY_LOCATION

The location name cannot be an empty string, but <location> was given.

42K05 # REQUIRES_SINGLE_PART_NAMESPACE

<sessionCatalog> requires a single-part namespace, but got <namespace>.

42K05 # SHOW_COLUMNS_WITH_CONFLICT_NAMESPACE

SHOW COLUMNS with conflicting namespaces: <namespaceA> != <namespaceB>.

42K06 # INVALID_OPTIONS

Invalid options:

# NON_MAP_FUNCTION

Must use the map() function for options.

# NON_STRING_TYPE

The keys and values in map() must be of string type, but got <mapType>.

42K06 # STATE_STORE_INVALID_CONFIG_AFTER_RESTART

Cannot change <configName> from <oldConfig> to <newConfig> between restarts. Please set <configName> to <oldConfig>, or restart with a new checkpoint directory.

42K06 # STATE_STORE_INVALID_PROVIDER

The given State Store Provider <inputClass> does not extend org.apache.spark.sql.execution.streaming.state.StateStoreProvider.

42K06 # STATE_STORE_INVALID_VARIABLE_TYPE_CHANGE

Cannot change <stateVarName> to <newType> between query restarts. Please set <stateVarName> to <oldType>, or restart with a new checkpoint directory.

42K06 # STATE_STORE_PROVIDER_DOES_NOT_SUPPORT_FINE_GRAINED_STATE_REPLAY

The given State Store Provider <inputClass> does not extend org.apache.spark.sql.execution.streaming.state.SupportsFineGrainedReplay. Therefore, it does not support option snapshotStartBatchId or readChangeFeed in state data source.

42K07 # INVALID_SCHEMA

The input schema <inputSchema> is not a valid schema string.

# NON_STRING_LITERAL

The input expression must be a string literal and not null.

# NON_STRUCT_TYPE

The input expression must evaluate to a struct type, but got <dataType>.

# PARSE_ERROR

Cannot parse the schema: <reason>

42K08 # INVALID_SQL_ARG

The argument <name> of sql() is invalid. Consider replacing it with a SQL literal or with collection constructor functions such as map(), array(), or struct().

42K08 # NON_FOLDABLE_ARGUMENT

The function <funcName> requires the parameter <paramName> to be a foldable expression of the type <paramType>, but the actual argument is non-foldable.

42K08 # NON_LITERAL_PIVOT_VALUES

Literal expressions are required for pivot values, but found <expression>.

42K08 # SEED_EXPRESSION_IS_UNFOLDABLE

The seed expression <seedExpr> of the expression <exprWithSeed> must be foldable.

42K09 # COMPLEX_EXPRESSION_UNSUPPORTED_INPUT

Cannot process input data types for the expression: <expression>.

# BAD_INPUTS

The input data types to <functionName> must be valid, but found the input types <dataType>.

# MISMATCHED_TYPES

All input types must be the same except nullable, containsNull, valueContainsNull flags, but found the input types <inputTypes>.

# NO_INPUTS

The collection of input data types must not be empty.

42K09 # DATATYPE_MISMATCH

Cannot resolve <sqlExpr> due to data type mismatch:

# ARRAY_FUNCTION_DIFF_TYPES

Input to <functionName> should have been <dataType> followed by a value with the same element type, but it's [<leftType>, <rightType>].

# BINARY_ARRAY_DIFF_TYPES

Input to function <functionName> should have been two <arrayType> with the same element type, but it's [<leftType>, <rightType>].

# BINARY_OP_DIFF_TYPES

the left and right operands of the binary operator have incompatible types (<left> and <right>).

# BINARY_OP_WRONG_TYPE

the binary operator requires the input type <inputType>, not <actualDataType>.

# BLOOM_FILTER_BINARY_OP_WRONG_TYPE

The Bloom filter binary input to <functionName> should be either a constant value or a scalar subquery expression, but it's <actual>.

# BLOOM_FILTER_WRONG_TYPE

Input to function <functionName> should have been <expectedLeft> followed by a value with <expectedRight>, but it's [<actual>].

# CANNOT_CONVERT_TO_JSON

Unable to convert column <name> of type <type> to JSON.

# CANNOT_DROP_ALL_FIELDS

Cannot drop all fields in struct.

# CAST_WITHOUT_SUGGESTION

cannot cast <srcType> to <targetType>.

# CAST_WITH_CONF_SUGGESTION

cannot cast <srcType> to <targetType> with ANSI mode on. If you have to cast <srcType> to <targetType>, you can set <config> as <configVal>.

# CAST_WITH_FUNC_SUGGESTION

cannot cast <srcType> to <targetType>. To convert values from <srcType> to <targetType>, you can use the functions <functionNames> instead.

# CREATE_MAP_KEY_DIFF_TYPES

The given keys of function <functionName> should all be the same type, but they are <dataType>.

# CREATE_MAP_VALUE_DIFF_TYPES

The given values of function <functionName> should all be the same type, but they are <dataType>.

# CREATE_NAMED_STRUCT_WITHOUT_FOLDABLE_STRING

Only foldable STRING expressions are allowed to appear at odd positions, but they are <inputExprs>.

# DATA_DIFF_TYPES

Input to <functionName> should all be the same type, but it's <dataType>.

# FILTER_NOT_BOOLEAN

Filter expression <filter> of type <type> is not a boolean.

# HASH_MAP_TYPE

Input to the function <functionName> cannot contain elements of the "MAP" type. In Spark, equal maps may have different hashcodes; thus, hash expressions are prohibited on "MAP" elements. To restore the previous behavior, set "spark.sql.legacy.allowHashOnMapType" to "true".

# HASH_VARIANT_TYPE

Input to the function <functionName> cannot contain elements of the "VARIANT" type yet.

# INPUT_SIZE_NOT_ONE

Length of <exprName> should be 1.

# INVALID_ARG_VALUE

The <inputName> value must be a <requireType> literal of <validValues>, but got <inputValue>.

# INVALID_JSON_MAP_KEY_TYPE

Input schema <schema> can only contain STRING as a key type for a MAP.

# INVALID_JSON_SCHEMA

Input schema <schema> must be a struct, an array, a map or a variant.

# INVALID_MAP_KEY_TYPE

The key of map cannot be/contain <keyType>.

# INVALID_ORDERING_TYPE

The <functionName> does not support ordering on type <dataType>.

# INVALID_ROW_LEVEL_OPERATION_ASSIGNMENTS

<errors>

# INVALID_XML_MAP_KEY_TYPE

Input schema <schema> can only contain STRING as a key type for a MAP.

# IN_SUBQUERY_DATA_TYPE_MISMATCH

The data type of one or more elements in the left hand side of an IN subquery is not compatible with the data type of the output of the subquery. Mismatched columns: [<mismatchedColumns>], left side: [<leftType>], right side: [<rightType>].

# IN_SUBQUERY_LENGTH_MISMATCH

The number of columns in the left hand side of an IN subquery does not match the number of columns in the output of subquery. Left hand side columns(length: <leftLength>): [<leftColumns>], right hand side columns(length: <rightLength>): [<rightColumns>].

# MAP_CONCAT_DIFF_TYPES

The inputs to <functionName> should all be of type map, but it's <dataType>.

# MAP_FUNCTION_DIFF_TYPES

Input to <functionName> should have been <dataType> followed by a value with the same key type, but it's [<leftType>, <rightType>].

# MAP_ZIP_WITH_DIFF_TYPES

Input to the <functionName> should have been two maps with compatible key types, but it's [<leftType>, <rightType>].

# NON_FOLDABLE_INPUT

the input <inputName> should be a foldable <inputType> expression; however, got <inputExpr>.

# NON_STRING_TYPE

all arguments of the function <funcName> must be strings.

# NULL_TYPE

Null typed values cannot be used as arguments of <functionName>.

# PARAMETER_CONSTRAINT_VIOLATION

The <leftExprName>(<leftExprValue>) must be <constraint> the <rightExprName>(<rightExprValue>).

# RANGE_FRAME_INVALID_TYPE

The data type <orderSpecType> used in the order specification does not support the data type <valueBoundaryType> which is used in the range frame.

# RANGE_FRAME_MULTI_ORDER

A range window frame with value boundaries cannot be used in a window specification with multiple order by expressions: <orderSpec>.

# RANGE_FRAME_WITHOUT_ORDER

A range window frame cannot be used in an unordered window specification.

# SEQUENCE_WRONG_INPUT_TYPES

<functionName> uses the wrong parameter type. The parameter type must conform to: 1. The start and stop expressions must resolve to the same type. 2. If start and stop expressions resolve to the <startType> type, then the step expression must resolve to the <stepType> type. 3. Otherwise, if start and stop expressions resolve to the <otherStartType> type, then the step expression must resolve to the same type.

# SPECIFIED_WINDOW_FRAME_DIFF_TYPES

Window frame bounds <lower> and <upper> do not have the same type: <lowerType> <> <upperType>.

# SPECIFIED_WINDOW_FRAME_INVALID_BOUND

Window frame upper bound <upper> does not follow the lower bound <lower>.

# SPECIFIED_WINDOW_FRAME_UNACCEPTED_TYPE

The data type of the <location> bound <exprType> does not match the expected data type <expectedType>.

# SPECIFIED_WINDOW_FRAME_WITHOUT_FOLDABLE

Window frame <location> bound <expression> is not a literal.

# SPECIFIED_WINDOW_FRAME_WRONG_COMPARISON

The lower bound of a window frame must be <comparison> to the upper bound.

# STACK_COLUMN_DIFF_TYPES

The data types of the column (<columnIndex>) are not the same: <leftType> (<leftParamIndex>) <> <rightType> (<rightParamIndex>).

# TYPE_CHECK_FAILURE_WITH_HINT

<msg><hint>.

# UNEXPECTED_CLASS_TYPE

class <className> not found.

# UNEXPECTED_INPUT_TYPE

The <paramIndex> parameter requires the <requiredType> type, however <inputSql> has the type <inputType>.

# UNEXPECTED_NULL

The <exprName> must not be null.

# UNEXPECTED_RETURN_TYPE

The <functionName> requires the return type to be <expectedType>, but the actual return type is <actualType>.

# UNEXPECTED_STATIC_METHOD

cannot find a static method <methodName> that matches the argument types in <className>.

# UNSUPPORTED_INPUT_TYPE

The input of <functionName> cannot be of <dataType> type.

# VALUE_OUT_OF_RANGE

The <exprName> must be between <valueRange> (current value = <currentValue>).

# WRONG_NUM_ARG_TYPES

The expression requires <expectedNum> argument types but the actual number is <actualNum>.

# WRONG_NUM_ENDPOINTS

The number of endpoints must be >= 2 to construct intervals but the actual number is <actualNumber>.

42K09 # EVENT_TIME_IS_NOT_ON_TIMESTAMP_TYPE

The event time <eventName> has the invalid type <eventType>, but expected "TIMESTAMP".

42K09 # INVALID_VARIABLE_TYPE_FOR_QUERY_EXECUTE_IMMEDIATE

The variable type must be string, but got <varType>.

42K09 # PIVOT_VALUE_DATA_TYPE_MISMATCH

Invalid pivot value '<value>': value data type <valueType> does not match pivot column data type <pivotType>.

42K09 # TRANSPOSE_NO_LEAST_COMMON_TYPE

Transpose requires non-index columns to share a least common type, but <dt1> and <dt2> do not.

42K09 # UNEXPECTED_INPUT_TYPE

Parameter <paramIndex> of function <functionName> requires the <requiredType> type, however <inputSql> has the type <inputType>.

42K09 # UNPIVOT_VALUE_DATA_TYPE_MISMATCH

Unpivot value columns must share a least common type, some types do not: [<types>].

42K0A # UNPIVOT_REQUIRES_ATTRIBUTES

UNPIVOT requires all given <given> expressions to be columns when no <empty> expressions are given. These are not columns: [<expressions>].

42K0A # UNPIVOT_REQUIRES_VALUE_COLUMNS

At least one value column needs to be specified for UNPIVOT; all columns were specified as ids.

42K0B # INCONSISTENT_BEHAVIOR_CROSS_VERSION

You may get a different result due to upgrading to

# DATETIME_PATTERN_RECOGNITION

Spark >= 3.0: Failed to recognize the <pattern> pattern in the DateTimeFormatter. 1) You can set <config> to "LEGACY" to restore the behavior before Spark 3.0. 2) You can form a valid datetime pattern with the guide from '<docroot>/sql-ref-datetime-pattern.html'.

# DATETIME_WEEK_BASED_PATTERN

Spark >= 3.0: All week-based patterns are unsupported since Spark 3.0, detected week-based character: <c>. Please use the SQL function EXTRACT instead.

# PARSE_DATETIME_BY_NEW_PARSER

Spark >= 3.0: Failed to parse <datetime> in the new parser. You can set <config> to "LEGACY" to restore the behavior before Spark 3.0, or set it to "CORRECTED" and treat it as an invalid datetime string.

# READ_ANCIENT_DATETIME

Spark >= 3.0: reading dates before 1582-10-15 or timestamps before 1900-01-01T00:00:00Z from <format> files can be ambiguous, as the files may be written by Spark 2.x or legacy versions of Hive, which use a legacy hybrid calendar that is different from Spark 3.0+'s Proleptic Gregorian calendar. See more details in SPARK-31404. You can set the SQL config <config> or the datasource option <option> to "LEGACY" to rebase the datetime values w.r.t. the calendar difference during reading. To read the datetime values as they are, set the SQL config or the datasource option to "CORRECTED".

# WRITE_ANCIENT_DATETIME

Spark >= 3.0: writing dates before 1582-10-15 or timestamps before 1900-01-01T00:00:00Z into <format> files can be dangerous, as the files may be read by Spark 2.x or legacy versions of Hive later, which use a legacy hybrid calendar that is different from Spark 3.0+'s Proleptic Gregorian calendar. See more details in SPARK-31404. You can set <config> to "LEGACY" to rebase the datetime values w.r.t. the calendar difference during writing, to get maximum interoperability. Or set the config to "CORRECTED" to write the datetime values as they are, if you are sure that the written files will only be read by Spark 3.0+ or other systems that use the Proleptic Gregorian calendar.

42K0D # INVALID_LAMBDA_FUNCTION_CALL

Invalid lambda function call.

# DUPLICATE_ARG_NAMES

The lambda function has duplicate arguments <args>. Please consider renaming the arguments or setting <caseSensitiveConfig> to "true".

# NON_HIGHER_ORDER_FUNCTION

A lambda function should only be used in a higher order function. However, its class is <class>, which is not a higher order function.

# NUM_ARGS_MISMATCH

A higher order function expects <expectedNumArgs> arguments, but got <actualNumArgs>.
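
A sketch using the higher-order function transform (the array contents are arbitrary):

    # Raises DUPLICATE_ARG_NAMES: both lambda arguments are named x
    spark.sql("SELECT transform(array(1, 2), (x, x) -> x)")

    # Valid: transform accepts a one-argument (value) or two-argument (value, index) lambda
    spark.sql("SELECT transform(array(1, 2), (x, i) -> x + i)")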

42K0E # INVALID_LIMIT_LIKE_EXPRESSION

The limit like expression <expr> is invalid.

# DATA_TYPE

The <name> expression must be integer type, but got <dataType>.

# IS_NEGATIVE

The <name> expression must be equal to or greater than 0, but got <v>.

# IS_NULL

The evaluated <name> expression must not be null.

# IS_UNFOLDABLE

The <name> expression must evaluate to a constant value.

42K0E # INVALID_NON_DETERMINISTIC_EXPRESSIONS

The operator expects a deterministic expression, but the actual expression is <sqlExprs>.

42K0E # INVALID_OBSERVED_METRICS

Invalid observed metrics.

# AGGREGATE_EXPRESSION_WITH_DISTINCT_UNSUPPORTED

Aggregate expressions with DISTINCT are not allowed in observed metrics, but found: <expr>.

# AGGREGATE_EXPRESSION_WITH_FILTER_UNSUPPORTED

Aggregate expressions with a FILTER predicate are not allowed in observed metrics, but found: <expr>.

# MISSING_NAME

The observed metrics should be named: <operator>.

# NESTED_AGGREGATES_UNSUPPORTED

Nested aggregates are not allowed in observed metrics, but found: <expr>.

# NON_AGGREGATE_FUNC_ARG_IS_ATTRIBUTE

Attribute <expr> can only be used as an argument to an aggregate function.

# NON_AGGREGATE_FUNC_ARG_IS_NON_DETERMINISTIC

Non-deterministic expression <expr> can only be used as an argument to an aggregate function.

# WINDOW_EXPRESSIONS_UNSUPPORTED

Window expressions are not allowed in observed metrics, but found: <expr>.

42K0E # INVALID_TIME_TRAVEL_SPEC

Cannot specify both version and timestamp when time travelling the table.

42K0E # INVALID_TIME_TRAVEL_TIMESTAMP_EXPR

The time travel timestamp expression <expr> is invalid.

# INPUT

Cannot be cast to the "TIMESTAMP" type.

# NON_DETERMINISTIC

Must be deterministic.

# OPTION

The timestamp string in the options must be castable to the TIMESTAMP type.

# UNEVALUABLE

Must be evaluable.

42K0E # JOIN_CONDITION_IS_NOT_BOOLEAN_TYPE

The join condition <joinCondition> has the invalid type <conditionType>, expected "BOOLEAN".

42K0E # MULTIPLE_TIME_TRAVEL_SPEC

Cannot specify time travel in both the time travel clause and options.

42K0E # MULTI_ALIAS_WITHOUT_GENERATOR

Multi-part aliasing (<names>) is not supported with <expr>, as it is not a generator function.

42K0E # MULTI_SOURCES_UNSUPPORTED_FOR_EXPRESSION

The expression <expr> does not support more than one source.

42K0E # NO_MERGE_ACTION_SPECIFIED

df.mergeInto needs to be followed by at least one of whenMatched/whenNotMatched/whenNotMatchedBySource.

42K0E # UNSUPPORTED_EXPR_FOR_OPERATOR

A query operator contains one or more unsupported expressions. Consider rewriting it to avoid window functions, aggregate functions, and generator functions in the WHERE clause. Invalid expressions: [<invalidExprSqls>].

42K0E # UNSUPPORTED_EXPR_FOR_PARAMETER

A query parameter contains an unsupported expression. Parameters can either be variables or literals. Invalid expression: [<invalidExprSql>].

42K0E # UNSUPPORTED_GENERATOR

The generator is not supported:

# MULTI_GENERATOR

only one generator allowed per SELECT clause but found <num>: <generators>.

# NESTED_IN_EXPRESSIONS

nested in expressions <expression>.

# NOT_GENERATOR

<functionName> is expected to be a generator. However, its class is <classCanonicalName>, which is not a generator.

# OUTSIDE_SELECT

outside the SELECT clause, found: <plan>.

42K0E # UNSUPPORTED_GROUPING_EXPRESSION

grouping()/grouping_id() can only be used with GroupingSets/Cube/Rollup.

42K0E # UNSUPPORTED_MERGE_CONDITION

MERGE operation contains unsupported <condName> condition.

# AGGREGATE

Aggregates are not allowed: <cond>.

# NON_DETERMINISTIC

Non-deterministic expressions are not allowed: <cond>.

# SUBQUERY

Subqueries are not allowed: <cond>.

42K0E # UNTYPED_SCALA_UDF

You're using an untyped Scala UDF, which does not have the input type information. Spark may blindly pass null to the Scala closure with a primitive-type argument, and the closure will see the default value of the Java type for the null argument, e.g. udf((x: Int) => x, IntegerType), where the result is 0 for null input. To get rid of this error, you could: 1. use typed Scala UDF APIs (without return type parameter), e.g. udf((x: Int) => x); 2. use Java UDF APIs, e.g. udf(new UDF1[String, Integer] { override def call(s: String): Integer = s.length() }, IntegerType), if input types are all non-primitive; 3. set "spark.sql.legacy.allowUntypedScalaUDF" to "true" and use this API with caution.

42K0E # WINDOW_FUNCTION_AND_FRAME_MISMATCH

<funcName> function can only be evaluated in an ordered row-based window frame with a single offset: <windowExpr>.

42K0F # INVALID_TEMP_OBJ_REFERENCE

Cannot create the persistent object <objName> of the type <obj> because it references the temporary object <tempObjName> of the type <tempObj>. Please make the temporary object <tempObjName> persistent, or make the persistent object <objName> temporary.
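
A minimal sketch (the view names are hypothetical):

    spark.sql("CREATE TEMP VIEW tmp AS SELECT 1 AS id")
    spark.sql("CREATE VIEW perm AS SELECT * FROM tmp")        # raises INVALID_TEMP_OBJ_REFERENCE
    spark.sql("CREATE TEMP VIEW perm2 AS SELECT * FROM tmp")  # fix: make the new view temporary too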

42K0G # PROTOBUF_DEPENDENCY_NOT_FOUND

Could not find dependency: <dependencyName>.

42K0G # PROTOBUF_DESCRIPTOR_FILE_NOT_FOUND

Error reading Protobuf descriptor file at path: <filePath>.

42K0G # PROTOBUF_FIELD_MISSING

Searching for <field> in Protobuf schema at <protobufSchema> gave <matchSize> matches. Candidates: <matches>.

42K0G # PROTOBUF_FIELD_MISSING_IN_SQL_SCHEMA

Found <field> in Protobuf schema but there is no match in the SQL schema.

42K0G # PROTOBUF_FIELD_TYPE_MISMATCH

Type mismatch encountered for field: <field>.

42K0G # PROTOBUF_MESSAGE_NOT_FOUND

Unable to locate Message <messageName> in Descriptor.

42K0G # PROTOBUF_TYPE_NOT_SUPPORT

Protobuf type not yet supported: <protobufType>.

42K0G # RECURSIVE_PROTOBUF_SCHEMA

Found a recursive reference in the Protobuf schema, which cannot be processed by Spark by default: <fieldDescriptor>. Try setting the option recursive.fields.max.depth to a value between 1 and 10. Going beyond 10 levels of recursion is not allowed.

42K0G # UNABLE_TO_CONVERT_TO_PROTOBUF_MESSAGE_TYPE

Unable to convert SQL type <toType> to Protobuf type <protobufType>.

42K0G # UNKNOWN_PROTOBUF_MESSAGE_TYPE

Attempting to treat <descriptorName> as a Message, but it was <containingType>.

42K0H # RECURSIVE_VIEW

Recursive view <viewIdent> detected (cycle: <newPath>).

42K0I # SQL_CONF_NOT_FOUND

The SQL config <sqlConf> cannot be found. Please verify that the config exists.

42K0K # INVALID_WITHIN_GROUP_EXPRESSION

Invalid function <funcName> with WITHIN GROUP.

# DISTINCT_UNSUPPORTED

The function does not support DISTINCT with WITHIN GROUP.

# MISMATCH_WITH_DISTINCT_INPUT

The function is invoked with DISTINCT and WITHIN GROUP but expressions <funcArg> and <orderingExpr> do not match. The WITHIN GROUP ordering expression must be picked from the function inputs.

# WITHIN_GROUP_MISSING

WITHIN GROUP is required for the function.

# WRONG_NUM_ORDERINGS

The function requires <expectedNum> orderings in WITHIN GROUP but got <actualNum>.

42K0L # END_LABEL_WITHOUT_BEGIN_LABEL

End label <endLabel> cannot exist without a begin label.

42K0L # INVALID_LABEL_USAGE

The usage of the label <labelName> is invalid.

# DOES_NOT_EXIST

Label was used in the <statementType> statement, but the label does not belong to any surrounding block.

# ITERATE_IN_COMPOUND

ITERATE statement cannot be used with a label that belongs to a compound (BEGIN...END) body.

42K0L # LABELS_MISMATCH

Begin label <beginLabel> does not match the end label <endLabel>.

42K0L # LABEL_ALREADY_EXISTS

The label <label> already exists. Choose another name or rename the existing label.

42K0M # INVALID_VARIABLE_DECLARATION

Invalid variable declaration.

# NOT_ALLOWED_IN_SCOPE

Declaration of the variable <varName> is not allowed in this scope.

# ONLY_AT_BEGINNING

Variable <varName> can only be declared at the beginning of the compound.

42K0N # INVALID_EXTERNAL_TYPE

The external type <externalType> is not valid for the type <type> at the expression <expr>.

42K0O # SCALAR_FUNCTION_NOT_COMPATIBLE

ScalarFunction <scalarFunc> does not override the method 'produceResult(InternalRow)' with a custom implementation.

42K0P # SCALAR_FUNCTION_NOT_FULLY_IMPLEMENTED

ScalarFunction <scalarFunc> neither implements nor overrides the method 'produceResult(InternalRow)'.

42KD0 # AMBIGUOUS_ALIAS_IN_NESTED_CTE

Name <name> is ambiguous in the nested CTE. Please set <config> to "CORRECTED" so that the name defined in the inner CTE takes precedence. If set to "LEGACY", outer CTE definitions will take precedence. See '<docroot>/sql-migration-guide.html#query-engine'.

42KD9 # CANNOT_MERGE_SCHEMAS

Failed merging schemas. Initial schema: <left>. Schema that cannot be merged with the initial schema: <right>.

42KD9 # UNABLE_TO_INFER_SCHEMA

Unable to infer schema for <format>. It must be specified manually.

42KDE # CALL_ON_STREAMING_DATASET_UNSUPPORTED

The method <methodName> cannot be called on a streaming Dataset/DataFrame.

42KDE # CANNOT_CREATE_DATA_SOURCE_TABLE

Failed to create data source table <tableName>:

# EXTERNAL_METADATA_UNSUPPORTED

provider '<provider>' does not support external metadata but a schema is provided. Please remove the schema when creating the table.

42KDE # INVALID_WRITER_COMMIT_MESSAGE

The data source writer has generated an invalid number of commit messages. Expected exactly one writer commit message from each task, but received <detail>.

42KDE # NON_TIME_WINDOW_NOT_SUPPORTED_IN_STREAMING

Window function is not supported in <windowFunc> (as column <columnName>) on streaming DataFrames/Datasets. Structured Streaming only supports time-window aggregation using the WINDOW function. (window specification: <windowSpec>)

42KDE # STREAMING_OUTPUT_MODE

Invalid streaming output mode: <outputMode>.

# INVALID

Accepted output modes are 'Append', 'Complete', 'Update'.

# UNSUPPORTED_DATASOURCE

This output mode is not supported in Data Source <className>.

# UNSUPPORTED_OPERATION

This output mode is not supported for <operation> on streaming DataFrames/DataSets.

42KDF # XML_ROW_TAG_MISSING

<rowTag> option is required for reading files in XML format.

42P01 # TABLE_OR_VIEW_NOT_FOUND

The table or view <relationName> cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog. To tolerate the error on drop use DROP VIEW IF EXISTS or DROP TABLE IF EXISTS.

42P01 # VIEW_NOT_FOUND

The view <relationName> cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog. To tolerate the error on drop use DROP VIEW IF EXISTS.

42P02 # UNBOUND_SQL_PARAMETER

Found the unbound parameter: <name>. Please fix args and provide a mapping of the parameter to either a SQL literal or collection constructor functions such as map(), array(), or struct().
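
For illustration (the parameter name is hypothetical; assumes PySpark 3.4+ named-argument support):

    spark.sql("SELECT :threshold")                          # raises UNBOUND_SQL_PARAMETER
    spark.sql("SELECT :threshold", args={"threshold": 10})  # parameter bound to a SQL literal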

42P06 # SCHEMA_ALREADY_EXISTS

Cannot create schema <schemaName> because it already exists. Choose a different name, drop the existing schema, or add the IF NOT EXISTS clause to tolerate pre-existing schema.

42P07 # TABLE_OR_VIEW_ALREADY_EXISTS

Cannot create table or view <relationName> because it already exists. Choose a different name, drop or replace the existing object, or add the IF NOT EXISTS clause to tolerate pre-existing objects.

42P07 # TEMP_TABLE_OR_VIEW_ALREADY_EXISTS

Cannot create the temporary view <relationName> because it already exists. Choose a different name, drop or replace the existing view, or add the IF NOT EXISTS clause to tolerate pre-existing views.

42P07 # VIEW_ALREADY_EXISTS

Cannot create view <relationName> because it already exists. Choose a different name, drop or replace the existing object, or add the IF NOT EXISTS clause to tolerate pre-existing objects.

42P08 # CATALOG_NOT_FOUND

The catalog <catalogName> was not found. Consider setting the SQL config <config> to a catalog plugin.

42P10 # CLUSTERING_COLUMNS_MISMATCH

Specified clustering does not match that of the existing table <tableName>. Specified clustering columns: [<specifiedClusteringString>]. Existing clustering columns: [<existingClusteringString>].

42P20 # MISSING_WINDOW_SPECIFICATION

Window specification is not defined in the WINDOW clause for <windowName>. For more information about WINDOW clauses, please refer to '<docroot>/sql-ref-syntax-qry-select-window.html'.

42P20 # UNSUPPORTED_EXPR_FOR_WINDOW

Expression <sqlExpr> is not supported within a window function.

42P21 # COLLATION_MISMATCH

Could not determine which collation to use for string functions and operators.

# EXPLICIT

Error occurred due to the mismatch between explicit collations: [<explicitTypes>]. Decide on a single explicit collation and remove others.

# IMPLICIT

Error occurred due to the mismatch between implicit collations: [<implicitTypes>]. Use the COLLATE function to set the collation explicitly.
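
A sketch (assumes a Spark version with collation support and the UTF8_BINARY / UNICODE_CI collation names):

    # EXPLICIT mismatch: two different explicit collations in one comparison
    spark.sql("SELECT 'a' COLLATE UTF8_BINARY = 'A' COLLATE UNICODE_CI")

    # Fixed: decide on a single explicit collation
    spark.sql("SELECT 'a' COLLATE UNICODE_CI = 'A' COLLATE UNICODE_CI")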

42P22 # INDETERMINATE_COLLATION

The called function requires knowledge of the collation it should apply, but an indeterminate collation was found. Use the COLLATE function to set the collation explicitly.

42S22 # NO_SQL_TYPE_IN_PROTOBUF_SCHEMA

Cannot find <catalystFieldPath> in Protobuf schema.

42S23 # PARTITION_TRANSFORM_EXPRESSION_NOT_IN_PARTITIONED_BY

The expression <expression> must be inside 'partitionedBy'.

46103 # CANNOT_LOAD_FUNCTION_CLASS

Cannot load class <className> when registering the function <functionName>, please make sure it is on the classpath.

46110 # CANNOT_MODIFY_CONFIG

Cannot modify the value of the Spark config: <key>. See also '<docroot>/sql-migration-guide.html#ddl-statements'.

46121 # INVALID_COLUMN_NAME_AS_PATH

The datasource <datasource> cannot save the column <columnName> because its name contains some characters that are not allowed in file paths. Please use an alias to rename it.

46121 # INVALID_JAVA_IDENTIFIER_AS_FIELD_NAME

<fieldName> is not a valid Java identifier and cannot be used as a field name <walkedTypePath>.

51024 # INCOMPATIBLE_VIEW_SCHEMA_CHANGE

The SQL query of view <viewName> has an incompatible schema change and column <colName> cannot be resolved. Expected <expectedNum> columns named <colName> but got <actualCols>. Please try to re-create the view by running: <suggestion>.

53200 # UNABLE_TO_ACQUIRE_MEMORY

Unable to acquire <requestedBytes> bytes of memory, got <receivedBytes>.

54000 # COLLECTION_SIZE_LIMIT_EXCEEDED

Cannot create an array with <numberOfElements> elements, which exceeds the array size limit <maxRoundedArrayLength>,

# FUNCTION

due to an unsuccessful attempt to create arrays in the function <functionName>.

# INITIALIZE

cannot initialize an array with the specified parameters.

# PARAMETER

the value of parameter(s) <parameter> in the function <functionName> is invalid.

54000 # GROUPING_SIZE_LIMIT_EXCEEDED

Grouping sets size cannot be greater than <maxSize>.

54001 # FAILED_TO_PARSE_TOO_COMPLEX

The statement, including potential SQL functions and referenced views, was too complex to parse. To mitigate this error, divide the statement into multiple, less complex chunks.

54006 # EXCEED_LIMIT_LENGTH

Exceeds char/varchar type length limitation: <limit>.

54006 # KRYO_BUFFER_OVERFLOW

Kryo serialization failed: <exceptionMsg>. To avoid this, increase "<bufferSizeConfKey>" value.

54006 # TRANSPOSE_EXCEED_ROW_LIMIT

Number of rows exceeds the allowed limit of <maxValues> for TRANSPOSE. If this was intended, set <config> to at least the current row count.

54011 # TUPLE_SIZE_EXCEEDS_LIMIT

Due to Scala's limited support for tuples, tuples with more than 22 elements are not supported.
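
The usual workaround is to model wide records as a case class rather than a tuple, since Spark derives encoders for case classes with more than 22 fields. A sketch with made-up field names, runnable in spark-shell:

    import org.apache.spark.sql.SparkSession

    // A case class can hold as many fields as needed, unlike Scala tuples,
    // which stop at Tuple22.
    case class WideRow(c1: Int, c2: Int, c3: Int) // extend past 22 fields freely

    val spark = SparkSession.builder().appName("wide-row").getOrCreate()
    import spark.implicits._
    Seq(WideRow(1, 2, 3), WideRow(4, 5, 6)).toDS().show()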

54023 # TABLE_VALUED_FUNCTION_TOO_MANY_TABLE_ARGUMENTS

There are too many table arguments for the table-valued function. It allows one table argument, but got <num>. If you want to allow it, please set "spark.sql.allowMultipleTableArguments.enabled" to "true".

54K00 # VIEW_EXCEED_MAX_NESTED_DEPTH

The depth of view <viewName> exceeds the maximum view resolution depth (<maxNestedDepth>). Analysis is aborted to avoid errors. If you want to work around this, please try to increase the value of "spark.sql.view.maxNestedViewDepth".
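
Since spark.sql.view.maxNestedViewDepth is a runtime SQL config, it can be raised on a live session. A one-line sketch in spark-shell, where `spark` is the predefined SparkSession:

    // Allow deeper view nesting before analysis aborts (default is 100).
    spark.conf.set("spark.sql.view.maxNestedViewDepth", "200")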

56000 # CHECKPOINT_RDD_BLOCK_ID_NOT_FOUND

Checkpoint block <rddBlockId> not found! Either the executor that originally checkpointed this partition is no longer alive, or the original RDD is unpersisted. If this problem persists, you may consider using rdd.checkpoint() instead, which is slower than local checkpointing but more fault-tolerant.
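
A sketch of the more fault-tolerant alternative the message recommends, with an assumed checkpoint directory path:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("checkpoint-example").getOrCreate()
    val sc = spark.sparkContext
    sc.setCheckpointDir("/tmp/checkpoints") // use reliable storage (e.g. HDFS) in real jobs

    val rdd = sc.parallelize(1 to 1000).map(_ * 2)

    // rdd.localCheckpoint() is faster, but its blocks live on executors and can
    // disappear with them, causing CHECKPOINT_RDD_BLOCK_ID_NOT_FOUND.
    // rdd.checkpoint() writes to the checkpoint directory instead.
    rdd.checkpoint()
    rdd.count() // an action materializes the RDD and its checkpoint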

56038 # CODEC_NOT_AVAILABLE

The codec <codecName> is not available.

# WITH_AVAILABLE_CODECS_SUGGESTION

Available codecs are <availableCodecs>.

# WITH_CONF_SUGGESTION

Consider setting the config <configKey> to <configVal>.

56038 # FEATURE_NOT_ENABLED

The feature <featureName> is not enabled. Consider setting the config <configKey> to <configValue> to enable this capability.

56038 # GET_TABLES_BY_TYPE_UNSUPPORTED_BY_HIVE_VERSION

Hive 2.2 and lower versions do not support getTablesByType. Please use Hive 2.3 or a higher version.

56038 # INCOMPATIBLE_DATASOURCE_REGISTER

Detected an incompatible DataSourceRegister. Please remove the incompatible library from the classpath or upgrade it. Error: <message>

56K00 # CONNECT

Generic Spark Connect error.

# INTERCEPTOR_CTOR_MISSING

Cannot instantiate GRPC interceptor because <cls> is missing a default constructor without arguments.

# INTERCEPTOR_RUNTIME_ERROR

Error instantiating GRPC interceptor: <msg>

# PLUGIN_CTOR_MISSING

Cannot instantiate Spark Connect plugin because <cls> is missing a default constructor without arguments.

# PLUGIN_RUNTIME_ERROR

Error instantiating Spark Connect plugin: <msg>

# SESSION_NOT_SAME

Both Datasets must belong to the same SparkSession.

58030 # CANNOT_LOAD_STATE_STORE

An error occurred while loading state.

# CANNOT_FIND_BASE_SNAPSHOT_CHECKPOINT

Cannot find a base snapshot checkpoint with lineage: <lineage>.

# CANNOT_READ_CHECKPOINT

Cannot read RocksDB checkpoint metadata. Expected <expectedVersion>, but found <actualVersion>.

# CANNOT_READ_DELTA_FILE_KEY_SIZE

Error reading delta file <fileToRead> of <clazz>: key size cannot be <keySize>.

# CANNOT_READ_DELTA_FILE_NOT_EXISTS

Error reading delta file <fileToRead> of <clazz>: <fileToRead> does not exist.

# CANNOT_READ_MISSING_SNAPSHOT_FILE

Error reading snapshot file <fileToRead> of <clazz>: <fileToRead> does not exist.

# CANNOT_READ_SNAPSHOT_FILE_KEY_SIZE

Error reading snapshot file <fileToRead> of <clazz>: key size cannot be <keySize>.

# CANNOT_READ_SNAPSHOT_FILE_VALUE_SIZE

Error reading snapshot file <fileToRead> of <clazz>: value size cannot be <valueSize>.

# CANNOT_READ_STREAMING_STATE_FILE

Error reading streaming state file <fileToRead>: the file does not exist. If the stream job is restarted with a new or updated state operation, please create a new checkpoint location or clear the existing checkpoint location.

# HDFS_STORE_PROVIDER_OUT_OF_MEMORY

Could not load HDFS state store with id <stateStoreId> because of an out-of-memory exception.

# INVALID_CHANGE_LOG_READER_VERSION

The change log reader version cannot be <version>. The checkpoint is probably from a future Spark version; please upgrade your Spark.

# INVALID_CHANGE_LOG_WRITER_VERSION

The change log writer version cannot be <version>.

# ROCKSDB_STORE_PROVIDER_OUT_OF_MEMORY

Could not load RocksDB state store with id <stateStoreId> because of an out-of-memory exception.

# SNAPSHOT_PARTITION_ID_NOT_FOUND

Partition id <snapshotPartitionId> not found for state of operator <operatorId> at <checkpointLocation>.

# UNCATEGORIZED

# UNEXPECTED_FILE_SIZE

Copied <dfsFile> to <localFile>, expected <expectedSize> bytes, found <localFileSize> bytes.

# UNEXPECTED_VERSION

Version cannot be <version> because it is less than 0.

# UNRELEASED_THREAD_ERROR

<loggingId>: RocksDB instance could not be acquired by <newAcquiredThreadInfo> for operationType=<operationType> as it was not released by <acquiredThreadInfo> after <timeWaitedMs> ms. Thread holding the lock has trace: <stackTraceOutput>

58030 # CANNOT_RESTORE_PERMISSIONS_FOR_PATH

Failed to set permissions on created path <path> back to <permission>.

58030 # CANNOT_WRITE_STATE_STORE

Error writing state store files for provider <providerClass>.

# CANNOT_COMMIT

Cannot perform commit during state checkpoint.

58030 # FAILED_RENAME_TEMP_FILE

Failed to rename temp file <srcPath> to <dstPath> as FileSystem.rename returned false.

58030 # INVALID_BUCKET_FILE

Invalid bucket file: <path>.

58030 # TASK_WRITE_FAILED

Task failed while writing rows to <path>.

58030 # UNABLE_TO_FETCH_HIVE_TABLES

Unable to fetch tables of Hive database: <dbName>.

F0000 # INVALID_DRIVER_MEMORY

System memory <systemMemory> must be at least <minSystemMemory>. Please increase heap size using the --driver-memory option or "<config>" in Spark configuration.

F0000 # INVALID_EXECUTOR_MEMORY

Executor memory <executorMemory> must be at least <minSystemMemory>. Please increase executor memory using the --executor-memory option or "<config>" in Spark configuration.

F0000 # INVALID_KRYO_SERIALIZER_BUFFER_SIZE

The value of the config "<bufferSizeConfKey>" must be less than 2048 MiB, but got <bufferSizeConfValue> MiB.

HV000 # FAILED_JDBC

Failed JDBC <url> on the operation:

# ALTER_TABLE

Alter the table <tableName>.

# CREATE_INDEX

Create the index <indexName> in the <tableName> table.

# CREATE_NAMESPACE

Create the namespace <namespace>.

# CREATE_NAMESPACE_COMMENT

Create a comment on the namespace: <namespace>.

# CREATE_TABLE

Create the table <tableName>.

# DROP_INDEX

Drop the index <indexName> in the <tableName> table.

# DROP_NAMESPACE

Drop the namespace <namespace>.

# GET_TABLES

Get tables from the namespace: <namespace>.

# LIST_NAMESPACES

List namespaces.

# LOAD_TABLE

Load the table <tableName>.

# NAMESPACE_EXISTS

Check that the namespace <namespace> exists.

# REMOVE_NAMESPACE_COMMENT

Remove a comment on the namespace: <namespace>.

# RENAME_TABLE

Rename the table <oldName> to <newName>.

# TABLE_EXISTS

Check that the table <tableName> exists.

# UNCLASSIFIED

<message>

HV091 # NONEXISTENT_FIELD_NAME_IN_LIST

Field(s) <nonExistFields> do(es) not exist. Available fields: <fieldNames>.

HY000 # INVALID_HANDLE

The handle <handle> is invalid.

# FORMAT

Handle must be a UUID string of the format '00112233-4455-6677-8899-aabbccddeeff'.

# OPERATION_ABANDONED

Operation was considered abandoned because of inactivity and removed.

# OPERATION_ALREADY_EXISTS

Operation already exists.

# OPERATION_NOT_FOUND

Operation not found.

# SESSION_CHANGED

The existing Spark server driver instance has restarted. Please reconnect.

# SESSION_CLOSED

Session was closed.

# SESSION_NOT_FOUND

Session not found.

HY000 # MISSING_TIMEOUT_CONFIGURATION

The operation has timed out, but no timeout duration is configured. To set a processing time-based timeout, use 'GroupState.setTimeoutDuration()' in your 'mapGroupsWithState' or 'flatMapGroupsWithState' operation. For event-time-based timeout, use 'GroupState.setTimeoutTimestamp()' and define a watermark using 'Dataset.withWatermark()'.
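
A minimal sketch of a processing-time timeout configured correctly, assuming a socket source on localhost:9999 and a per-key event count as the state; the setTimeoutDuration call inside the update function is the piece whose absence triggers this error:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.streaming.{GroupState, GroupStateTimeout}

    val spark = SparkSession.builder().appName("timeout-example").getOrCreate()
    import spark.implicits._

    // Count events per key; emit and drop keys that stay idle for 30 seconds.
    def updateState(key: String, events: Iterator[String],
                    state: GroupState[Long]): (String, Long) = {
      if (state.hasTimedOut) {
        val count = state.getOption.getOrElse(0L)
        state.remove()
        (key, count)
      } else {
        val count = state.getOption.getOrElse(0L) + events.size
        state.update(count)
        // Required with ProcessingTimeTimeout; omitting this call is what
        // raises MISSING_TIMEOUT_CONFIGURATION.
        state.setTimeoutDuration("30 seconds")
        (key, count)
      }
    }

    val lines = spark.readStream.format("socket")
      .option("host", "localhost").option("port", "9999").load().as[String]

    lines.groupByKey(identity)
      .mapGroupsWithState(GroupStateTimeout.ProcessingTimeTimeout)(updateState)
      .writeStream.outputMode("update").format("console").start()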

HY008 # OPERATION_CANCELED

Operation has been canceled.

HY109 # INVALID_CURSOR

The cursor is invalid.

# DISCONNECTED

The cursor has been disconnected by the server.

# NOT_REATTACHABLE

The cursor is not reattachable.

# POSITION_NOT_AVAILABLE

The cursor position id <responseId> is no longer available at index <index>.

# POSITION_NOT_FOUND

The cursor position id <responseId> is not found.

KD000 # FAILED_REGISTER_CLASS_WITH_KRYO

Failed to register classes with Kryo.

KD000 # GRAPHITE_SINK_INVALID_PROTOCOL

Invalid Graphite protocol: <protocol>.

KD000 # GRAPHITE_SINK_PROPERTY_MISSING

Graphite sink requires '<property>' property.

KD000 # INCOMPATIBLE_DATA_FOR_TABLE

Cannot write incompatible data for the table <tableName>:

# AMBIGUOUS_COLUMN_NAME

Ambiguous column name in the input data <colName>.

# CANNOT_FIND_DATA

Cannot find data for the output column <colName>.

# CANNOT_SAFELY_CAST

Cannot safely cast <colName> <srcType> to <targetType> (see the sketch after this list of sub-conditions).

# EXTRA_COLUMNS

Cannot write extra columns <extraColumns>.

# EXTRA_STRUCT_FIELDS

Cannot write extra fields <extraFields> to the struct <colName>.

# NULLABLE_ARRAY_ELEMENTS

Cannot write nullable elements to array of non-nulls: <colName>.

# NULLABLE_COLUMN

Cannot write nullable values to non-null column <colName>.

# NULLABLE_MAP_VALUES

Cannot write nullable values to map of non-nulls: <colName>.

# STRUCT_MISSING_FIELDS

Struct <colName> missing fields: <missingFields>.

# UNEXPECTED_COLUMN_NAME

Struct <colName> <order>-th field name does not match (may be out of order): expected <expected>, found <found>.
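
For the CANNOT_SAFELY_CAST case above, a minimal sketch of the usual fix, assuming a hypothetical target table whose amount column is INT while the incoming data carries it as STRING:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.col

    val spark = SparkSession.builder().appName("safe-cast").getOrCreate()
    import spark.implicits._

    val incoming = Seq(("a", "42"), ("b", "7")).toDF("id", "amount")

    // An explicit cast makes the narrowing intentional and satisfies the
    // store-assignment check that otherwise raises CANNOT_SAFELY_CAST.
    incoming
      .withColumn("amount", col("amount").cast("int"))
      .write.mode("append").saveAsTable("target_table") // table assumed to exist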

KD000 # MALFORMED_CSV_RECORD

Malformed CSV record: <badRecord>

KD001 # FAILED_READ_FILE

Encountered error while reading file <path>.

# FILE_NOT_EXIST

File does not exist. It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running the 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved (see the sketch after this list of sub-conditions).

# NO_HINT

# PARQUET_COLUMN_DATA_TYPE_MISMATCH

Data type mismatch when reading Parquet column <column>. Expected Spark type <expectedType>, actual Parquet type <actualType>.

# UNSUPPORTED_FILE_SYSTEM

The file system <fileSystemClass> has not implemented <method>.
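
For the FILE_NOT_EXIST case above, the cache can be refreshed from either API. A two-line sketch in spark-shell, assuming a hypothetical cached table named events whose files were rewritten externally:

    spark.catalog.refreshTable("events") // Scala API
    spark.sql("REFRESH TABLE events")    // equivalent SQL command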

KD005 # ALL_PARTITION_COLUMNS_NOT_ALLOWED

Cannot use all columns for partition columns.

KD006 # STDS_COMMITTED_BATCH_UNAVAILABLE

No committed batch found, checkpoint location: <checkpointLocation>. Ensure that the query has run and committed any microbatch before stopping.

KD006 # STDS_NO_PARTITION_DISCOVERED_IN_STATE_STORE

The state does not have any partition. Please double-check that the query points to valid state. Options: <sourceOptions>

KD006 # STDS_OFFSET_LOG_UNAVAILABLE

The offset log for <batchId> does not exist, checkpoint location: <checkpointLocation>. Please specify a batch ID that is available for querying; you can query the available batch IDs using the state metadata data source.

KD006 # STDS_OFFSET_METADATA_LOG_UNAVAILABLE

Metadata is not available for the offset log for <batchId>, checkpoint location: <checkpointLocation>. The checkpoint seems to have been run only with older Spark version(s). Run the streaming query with a recent Spark version so that Spark constructs the state metadata.

KD009 # CONFLICTING_DIRECTORY_STRUCTURES

Conflicting directory structures detected. Suspicious paths: <discoveredBasePaths>. If the provided paths are partition directories, please set "basePath" in the options of the data source to specify the root directory of the table. If there are multiple root directories, please load them separately and then union them.

KD009 # CONFLICTING_PARTITION_COLUMN_NAMES

Conflicting partition column names detected: <distinctPartColLists>. For partitioned table directories, data files should only live in leaf directories, and directories at the same level should have the same partition column name. Please check the following directories for unexpected files or inconsistent partition column names: <suspiciousPaths>

KD00B # ERROR_READING_AVRO_UNKNOWN_FINGERPRINT

Error reading Avro data: encountered an unknown fingerprint <fingerprint>, so it is unclear which schema to use. This could happen if you registered additional schemas after starting your Spark context.

KD010 # DATA_SOURCE_EXTERNAL_ERROR

Encountered an error when saving to an external data source.

P0001 # USER_RAISED_EXCEPTION

<errorMessage>

P0001 # USER_RAISED_EXCEPTION_PARAMETER_MISMATCH

The raise_error() function was used to raise error class: <errorClass> which expects parameters: <expectedParms>. The provided parameters <providedParms> do not match the expected parameters. Please make sure to provide all expected parameters.

P0001 # USER_RAISED_EXCEPTION_UNKNOWN_ERROR_CLASS

The raise_error() function was used to raise an unknown error class: <errorClass>.
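
For reference, the single-argument form of raise_error() raises USER_RAISED_EXCEPTION with the given text as <errorMessage>. A one-line sketch in spark-shell, where `spark` is the predefined SparkSession:

    // Fails the query with [USER_RAISED_EXCEPTION] and the supplied message.
    spark.sql("SELECT raise_error('account balance may not be negative')").show()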

XX000 # AMBIGUOUS_RESOLVER_EXTENSION

The single-pass analyzer cannot process this query or command because the extension choice for <operator> is ambiguous: <extensions>.

XX000 # HYBRID_ANALYZER_EXCEPTION

A failure occurred when attempting to resolve a query or command with both the legacy fixed-point analyzer and the single-pass resolver.

# FIXED_POINT_FAILED_SINGLE_PASS_SUCCEEDED

Fixed-point resolution failed, but single-pass resolution succeeded. Single-pass analyzer output: <singlePassOutput>

# LOGICAL_PLAN_COMPARISON_MISMATCH

Outputs of fixed-point and single-pass analyzers do not match. Fixed-point analyzer output: <fixedPointOutput> Single-pass analyzer output: <singlePassOutput>

# OUTPUT_SCHEMA_COMPARISON_MISMATCH

Output schemas of fixed-point and single-pass analyzers do not match. Fixed-point analyzer output schema: <fixedPointOutputSchema> Single-pass analyzer output schema: <singlePassOutputSchema>

XX000 # MALFORMED_PROTOBUF_MESSAGE

Malformed Protobuf messages were detected during message deserialization. Parse Mode: <failFastMode>. To process malformed Protobuf messages as a null result, try setting the option 'mode' to 'PERMISSIVE'.

XX000 # MISSING_ATTRIBUTES

Resolved attribute(s) <missingAttributes> missing from <input> in operator <operator>.

# RESOLVED_ATTRIBUTE_APPEAR_IN_OPERATION

Attribute(s) with the same name appear in the operation: <operation>. Please check if the right attribute(s) are used.

# RESOLVED_ATTRIBUTE_MISSING_FROM_INPUT

XX000 # STATE_STORE_KEY_ROW_FORMAT_VALIDATION_FAILURE

The streaming query failed to validate written state for the key row. The following reasons may cause this: 1. An old Spark version wrote the checkpoint that is incompatible with the current one. 2. Corrupt checkpoint files. 3. The query changed in an incompatible way between restarts. For the first case, use a new checkpoint directory or use the original Spark version to process the streaming state. Retrieved error_message=<errorMsg>

XX000 # STATE_STORE_VALUE_ROW_FORMAT_VALIDATION_FAILURE

The streaming query failed to validate written state for the value row. The following reasons may cause this: 1. An old Spark version wrote the checkpoint that is incompatible with the current one. 2. Corrupt checkpoint files. 3. The query changed in an incompatible way between restarts. For the first case, use a new checkpoint directory or use the original Spark version to process the streaming state. Retrieved error_message=<errorMsg>

XXKD0 # PLAN_VALIDATION_FAILED_RULE_EXECUTOR

The input plan of <ruleExecutor> is invalid: <reason>

XXKD0 # PLAN_VALIDATION_FAILED_RULE_IN_BATCH

Rule <rule> in batch <batch> generated an invalid plan: <reason>

XXKDA # SPARK_JOB_CANCELLED

Job <jobId> cancelled <reason>

XXKST # STATE_STORE_KEY_SCHEMA_NOT_COMPATIBLE

Provided key schema does not match existing state key schema. Please check number and type of fields. Existing key_schema=<storedKeySchema> and new key_schema=<newKeySchema>. If you want to force running the query without schema validation, please set spark.sql.streaming.stateStore.stateSchemaCheck to false. However, please note that running the query with incompatible schema could cause non-deterministic behavior.

XXKST # STATE_STORE_UNSUPPORTED_OPERATION

<operationType> operation not supported with <entity>

XXKST # STATE_STORE_UNSUPPORTED_OPERATION_BINARY_INEQUALITY

Binary inequality column is not supported with state store. Provided schema: <schema>.

XXKST # STATE_STORE_VALUE_SCHEMA_NOT_COMPATIBLE

Provided value schema does not match existing state value schema. Please check number and type of fields. Existing value_schema=<storedValueSchema> and new value_schema=<newValueSchema>. If you want to force running the query without schema validation, please set spark.sql.streaming.stateStore.stateSchemaCheck to false. However, please note that running the query with incompatible schema could cause non-deterministic behavior.
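
As both schema-compatibility messages above note, the check can be bypassed at your own risk. A one-line sketch in spark-shell, where `spark` is the predefined SparkSession:

    // Disables state schema validation; incompatible schemas may then cause
    // non-deterministic behavior, as the messages warn.
    spark.conf.set("spark.sql.streaming.stateStore.stateSchemaCheck", "false")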

XXKST # STDS_INTERNAL_ERROR

Internal error: <message>. Please report this bug to the corresponding communities or vendors, and provide the full stack trace.

XXKST # STREAMING_PYTHON_RUNNER_INITIALIZATION_FAILURE

Streaming Runner initialization failed, returned <resFromPython>. Cause: <msg>

XXKST # STREAM_FAILED

Query [id = <id>, runId = <runId>] terminated with exception: <message>

XXKUC # INSUFFICIENT_TABLE_PROPERTY

Cannot find table property:

# MISSING_KEY

<key>.

# MISSING_KEY_PART

<key>, <totalAmountOfParts> parts are expected.