
Deprecated API

Contents

  • Interfaces
  • Classes
  • Enum Classes
  • Exceptions
  • Fields
  • Methods
  • Constructors
  • Enum Constants
  • Deprecated Interfaces
    Interface
    Description
    org.apache.hadoop.io.Closeable
    Use java.io.Closeable instead.
    org.apache.hadoop.record.Index
    Replaced by Avro.
    org.apache.hadoop.record.RecordInput
    Replaced by Avro.
    org.apache.hadoop.record.RecordOutput
    Replaced by Avro.
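The deprecated org.apache.hadoop.io.Closeable added nothing over the standard interface, so code can implement java.io.Closeable directly and gain try-with-resources support. A minimal sketch (the CountingResource class is hypothetical, for illustration only):

```java
import java.io.Closeable;
import java.io.IOException;

// Hypothetical resource that implements the standard java.io.Closeable
// rather than the deprecated org.apache.hadoop.io.Closeable.
class CountingResource implements Closeable {
    static int closed = 0;   // counts how many times close() has run

    @Override
    public void close() throws IOException {
        closed++;            // record that close() ran
    }
}

public class CloseableDemo {
    public static void main(String[] args) throws IOException {
        // try-with-resources works with any java.io.Closeable;
        // close() is invoked automatically when the block exits.
        try (CountingResource r = new CountingResource()) {
            // use the resource here
        }
        System.out.println("closed=" + CountingResource.closed);
    }
}
```

Because java.io.Closeable is the supertype the JDK's resource-management machinery expects, no Hadoop-specific interface is needed.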
  • Deprecated Classes
    Class
    Description
    org.apache.hadoop.filecache.DistributedCache
    org.apache.hadoop.fs.azure.NativeAzureFileSystem
    org.apache.hadoop.fs.s3a.select.SelectConstants
    org.apache.hadoop.record.BinaryRecordInput
    Replaced by Avro.
    org.apache.hadoop.record.BinaryRecordOutput
    Replaced by Avro.
    org.apache.hadoop.record.Buffer
    Replaced by Avro.
    org.apache.hadoop.record.CsvRecordOutput
    Replaced by Avro.
    org.apache.hadoop.record.Record
    Replaced by Avro.
    org.apache.hadoop.record.RecordComparator
    Replaced by Avro.
    org.apache.hadoop.record.Utils
    Replaced by Avro.
    org.apache.hadoop.yarn.util.ApplicationClassLoader
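The org.apache.hadoop.record classes above were superseded by Avro, where the record layout is declared as a JSON schema rather than a generated Record subclass. A hypothetical schema for a simple two-field record (the record and field names are illustrative, not taken from Hadoop):

```json
{
  "type": "record",
  "name": "LogEntry",
  "namespace": "example.records",
  "fields": [
    {"name": "timestamp", "type": "long"},
    {"name": "message",   "type": "string"}
  ]
}
```

Such a schema can then be read and written with Avro's generic or specific APIs in place of the deprecated RecordInput/RecordOutput machinery.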
  • Deprecated Enum Classes
    Enum Class
    Description
    org.apache.hadoop.fs.StreamCapabilities.StreamCapability
    org.apache.hadoop.mapred.FileInputFormat.Counter
    org.apache.hadoop.mapred.FileOutputFormat.Counter
    org.apache.hadoop.mapred.JobInProgress.Counter
    Provided for compatibility. Use JobCounter instead.
    org.apache.hadoop.mapred.Task.Counter
    Provided for compatibility. Use TaskCounter instead.
    org.apache.hadoop.mapreduce.lib.input.FileInputFormat.Counter
    org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.Counter
  • Deprecated Exceptions
    Exception
    Description
    org.apache.hadoop.yarn.state.InvalidStateTransitonException
    Use InvalidStateTransitionException instead.
  • Deprecated Fields
    Field
    Description
    org.apache.hadoop.filecache.DistributedCache.CACHE_ARCHIVES
    org.apache.hadoop.filecache.DistributedCache.CACHE_ARCHIVES_SIZES
    org.apache.hadoop.filecache.DistributedCache.CACHE_ARCHIVES_TIMESTAMPS
    org.apache.hadoop.filecache.DistributedCache.CACHE_FILES
    org.apache.hadoop.filecache.DistributedCache.CACHE_FILES_SIZES
    org.apache.hadoop.filecache.DistributedCache.CACHE_FILES_TIMESTAMPS
    org.apache.hadoop.filecache.DistributedCache.CACHE_LOCALARCHIVES
    org.apache.hadoop.filecache.DistributedCache.CACHE_LOCALFILES
    org.apache.hadoop.filecache.DistributedCache.CACHE_SYMLINK
    org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_CUSTOM_TAGS
    Please use CommonConfigurationKeysPublic.HADOOP_TAGS_CUSTOM instead. See https://issues.apache.org/jira/browse/HADOOP-15474
    org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS
    use CommonConfigurationKeysPublic.HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_KEY instead.
    org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS_DEFAULT
    use CommonConfigurationKeysPublic.HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_DEFAULT instead.
    org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SYSTEM_TAGS
    Please use CommonConfigurationKeysPublic.HADOOP_TAGS_SYSTEM instead. See https://issues.apache.org/jira/browse/HADOOP-15474
    org.apache.hadoop.fs.CommonConfigurationKeysPublic.IO_SORT_FACTOR_KEY
    Moved to MapReduce; see mapreduce.task.io.sort.factor in mapred-default.xml and https://issues.apache.org/jira/browse/HADOOP-6801. For SequenceFile.Sorter control, see CommonConfigurationKeysPublic.SEQ_IO_SORT_FACTOR_KEY.
    org.apache.hadoop.fs.CommonConfigurationKeysPublic.IO_SORT_MB_KEY
    Moved to MapReduce; see mapreduce.task.io.sort.mb in mapred-default.xml and https://issues.apache.org/jira/browse/HADOOP-6801. For SequenceFile.Sorter control, see CommonConfigurationKeysPublic.SEQ_IO_SORT_MB_KEY.
    org.apache.hadoop.fs.s3a.commit.CommitConstants.FS_S3A_COMMITTER_STAGING_ABORT_PENDING_UPLOADS
    org.apache.hadoop.fs.s3a.commit.CommitConstants.STORE_CAPABILITY_MAGIC_COMMITTER_OLD
    org.apache.hadoop.fs.s3a.commit.CommitConstants.STREAM_CAPABILITY_MAGIC_OUTPUT_OLD
    org.apache.hadoop.fs.s3a.Constants.AUTHORITATIVE_PATH
    org.apache.hadoop.fs.s3a.Constants.AWS_SERVICE_IDENTIFIER_DDB
    org.apache.hadoop.fs.s3a.Constants.DEFAULT_AUTHORITATIVE_PATH
    org.apache.hadoop.fs.s3a.Constants.DEFAULT_CONNECTION_TTL
    use Constants.DEFAULT_CONNECTION_TTL_DURATION
    org.apache.hadoop.fs.s3a.Constants.DEFAULT_DIRECTORY_MARKER_POLICY
    org.apache.hadoop.fs.s3a.Constants.DEFAULT_ESTABLISH_TIMEOUT
    use Constants.DEFAULT_ESTABLISH_TIMEOUT_DURATION
    org.apache.hadoop.fs.s3a.Constants.DEFAULT_FAST_UPLOAD
    org.apache.hadoop.fs.s3a.Constants.DEFAULT_KEEPALIVE_TIME
    use Constants.DEFAULT_KEEPALIVE_TIME_DURATION
    org.apache.hadoop.fs.s3a.Constants.DEFAULT_METADATASTORE_AUTHORITATIVE
    org.apache.hadoop.fs.s3a.Constants.DEFAULT_METADATASTORE_METADATA_TTL
    org.apache.hadoop.fs.s3a.Constants.DEFAULT_REQUEST_TIMEOUT
    use Constants.DEFAULT_REQUEST_TIMEOUT_DURATION
    org.apache.hadoop.fs.s3a.Constants.DEFAULT_S3GUARD_DISABLED_WARN_LEVEL
    org.apache.hadoop.fs.s3a.Constants.DEFAULT_S3GUARD_METASTORE_LOCAL_ENTRY_TTL
    org.apache.hadoop.fs.s3a.Constants.DEFAULT_S3GUARD_METASTORE_LOCAL_MAX_RECORDS
    org.apache.hadoop.fs.s3a.Constants.DEFAULT_SOCKET_TIMEOUT
    use Constants.DEFAULT_SOCKET_TIMEOUT_DURATION
    org.apache.hadoop.fs.s3a.Constants.DIRECTORY_MARKER_POLICY
    org.apache.hadoop.fs.s3a.Constants.DIRECTORY_MARKER_POLICY_AUTHORITATIVE
    org.apache.hadoop.fs.s3a.Constants.DIRECTORY_MARKER_POLICY_DELETE
    org.apache.hadoop.fs.s3a.Constants.DIRECTORY_MARKER_POLICY_KEEP
    org.apache.hadoop.fs.s3a.Constants.FAIL_INJECT_INCONSISTENCY_KEY
    org.apache.hadoop.fs.s3a.Constants.FAIL_INJECT_INCONSISTENCY_MSEC
    org.apache.hadoop.fs.s3a.Constants.FAIL_INJECT_INCONSISTENCY_PROBABILITY
    org.apache.hadoop.fs.s3a.Constants.FAIL_ON_METADATA_WRITE_ERROR
    org.apache.hadoop.fs.s3a.Constants.FAIL_ON_METADATA_WRITE_ERROR_DEFAULT
    org.apache.hadoop.fs.s3a.Constants.FAST_BUFFER_SIZE
    org.apache.hadoop.fs.s3a.Constants.FAST_UPLOAD
    org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_DEFAULT
    Use the Options.OpenFileOptions value in code that only needs to compile against newer Hadoop releases.
    org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_RANDOM
    Use the Options.OpenFileOptions value in code that only needs to compile against newer Hadoop releases.
    org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_SEQUENTIAL
    Use the Options.OpenFileOptions value in code that only needs to compile against newer Hadoop releases.
    org.apache.hadoop.fs.s3a.Constants.METADATASTORE_AUTHORITATIVE
    no longer supported
    org.apache.hadoop.fs.s3a.Constants.METADATASTORE_METADATA_TTL
    no longer supported
    org.apache.hadoop.fs.s3a.Constants.PREFETCH_ENABLED_DEFAULT
    org.apache.hadoop.fs.s3a.Constants.PREFETCH_ENABLED_KEY
    org.apache.hadoop.fs.s3a.Constants.S3_METADATA_STORE_IMPL
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_CLI_PRUNE_AGE
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_CONSISTENCY_RETRY_INTERVAL
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_CONSISTENCY_RETRY_INTERVAL_DEFAULT
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_CONSISTENCY_RETRY_LIMIT
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_CONSISTENCY_RETRY_LIMIT_DEFAULT
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_DDB_BACKGROUND_SLEEP_MSEC_DEFAULT
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_DDB_BACKGROUND_SLEEP_MSEC_KEY
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_DDB_BATCH_WRITE_REQUEST_LIMIT
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_DDB_MAX_RETRIES
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_DDB_MAX_RETRIES_DEFAULT
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_DDB_REGION_KEY
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_DDB_TABLE_CAPACITY_READ_DEFAULT
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_DDB_TABLE_CAPACITY_READ_KEY
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_DDB_TABLE_CAPACITY_WRITE_DEFAULT
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_DDB_TABLE_CAPACITY_WRITE_KEY
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_DDB_TABLE_CREATE_KEY
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_DDB_TABLE_NAME_KEY
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_DDB_TABLE_SSE_CMK
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_DDB_TABLE_SSE_ENABLED
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_DDB_TABLE_TAG
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_DDB_THROTTLE_RETRY_INTERVAL
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_DDB_THROTTLE_RETRY_INTERVAL_DEFAULT
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_DISABLED_WARN_LEVEL
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_METASTORE_DYNAMO
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_METASTORE_LOCAL
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_METASTORE_LOCAL_ENTRY_TTL
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_METASTORE_LOCAL_MAX_RECORDS
    org.apache.hadoop.fs.s3a.Constants.S3GUARD_METASTORE_NULL
    org.apache.hadoop.fs.s3a.Constants.S3N_FOLDER_SUFFIX
    org.apache.hadoop.fs.s3a.Constants.SERVER_SIDE_ENCRYPTION_AES256
    org.apache.hadoop.fs.s3a.Constants.SERVER_SIDE_ENCRYPTION_ALGORITHM
    org.apache.hadoop.fs.s3a.Constants.SERVER_SIDE_ENCRYPTION_KEY
    org.apache.hadoop.fs.s3a.Constants.SIGNING_ALGORITHM_DDB
    org.apache.hadoop.fs.s3a.Constants.STORE_CAPABILITY_DIRECTORY_MARKER_MULTIPART_UPLOAD_ENABLED
    org.apache.hadoop.fs.s3a.Constants.STORE_CAPABILITY_DIRECTORY_MARKER_POLICY_AUTHORITATIVE
    org.apache.hadoop.fs.s3a.Constants.STORE_CAPABILITY_DIRECTORY_MARKER_POLICY_DELETE
    org.apache.hadoop.fs.StreamCapabilities.HFLUSH
    org.apache.hadoop.mapred.JobConf.DEFAULT_MAPREDUCE_RECOVER_JOB
    org.apache.hadoop.mapred.JobConf.DISABLED_MEMORY_LIMIT
    org.apache.hadoop.mapred.JobConf.MAPRED_JOB_MAP_MEMORY_MB_PROPERTY
    org.apache.hadoop.mapred.JobConf.MAPRED_JOB_REDUCE_MEMORY_MB_PROPERTY
    org.apache.hadoop.mapred.JobConf.MAPRED_MAP_TASK_ULIMIT
    Configuration key to set the maximum virtual memory available to the map tasks (in kilobytes). This has been deprecated and will no longer have any effect.
    org.apache.hadoop.mapred.JobConf.MAPRED_REDUCE_TASK_ULIMIT
    Configuration key to set the maximum virtual memory available to the reduce tasks (in kilobytes). This has been deprecated and will no longer have any effect.
    org.apache.hadoop.mapred.JobConf.MAPRED_TASK_DEFAULT_MAXVMEM_PROPERTY
     
    org.apache.hadoop.mapred.JobConf.MAPRED_TASK_ENV
    Use JobConf.MAPRED_MAP_TASK_ENV or JobConf.MAPRED_REDUCE_TASK_ENV
    org.apache.hadoop.mapred.JobConf.MAPRED_TASK_JAVA_OPTS
    Use JobConf.MAPRED_MAP_TASK_JAVA_OPTS or JobConf.MAPRED_REDUCE_TASK_JAVA_OPTS
    org.apache.hadoop.mapred.JobConf.MAPRED_TASK_MAXPMEM_PROPERTY
     
    org.apache.hadoop.mapred.JobConf.MAPRED_TASK_MAXVMEM_PROPERTY
    Use JobConf.MAPREDUCE_JOB_MAP_MEMORY_MB_PROPERTY and JobConf.MAPREDUCE_JOB_REDUCE_MEMORY_MB_PROPERTY
    org.apache.hadoop.mapred.JobConf.MAPRED_TASK_ULIMIT
    Configuration key to set the maximum virtual memory available to the child map and reduce tasks (in kilobytes). This has been deprecated and will no longer have any effect.
    org.apache.hadoop.mapred.JobConf.MAPREDUCE_RECOVER_JOB
    org.apache.hadoop.mapred.JobConf.UPPER_LIMIT_ON_TASK_VMEM_PROPERTY
     
    org.apache.hadoop.mapred.JobConf.WORKFLOW_ADJACENCY_PREFIX_PATTERN
    org.apache.hadoop.mapred.JobConf.WORKFLOW_ADJACENCY_PREFIX_STRING
    org.apache.hadoop.mapred.JobConf.WORKFLOW_ID
    org.apache.hadoop.mapred.JobConf.WORKFLOW_NAME
    org.apache.hadoop.mapred.JobConf.WORKFLOW_NODE_NAME
    org.apache.hadoop.mapred.JobConf.WORKFLOW_TAGS
    org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionHelper.DATA_FIELD_SEPERATOR
    Use FieldSelectionHelper.DATA_FIELD_SEPARATOR
    org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader.KEY_VALUE_SEPERATOR
    Use KeyValueLineRecordReader.KEY_VALUE_SEPARATOR
    org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.TEMP_DIR_NAME
    org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.SEPERATOR
    Use TextOutputFormat.SEPARATOR
    org.apache.hadoop.util.Shell.WINDOWS_MAX_SHELL_LENGHT
    use the correctly spelled constant.
    org.apache.hadoop.util.Shell.WINUTILS
    use one of the exception-raising getter methods, specifically Shell.getWinUtilsPath() or Shell.getWinUtilsFile()
    org.apache.hadoop.yarn.conf.YarnConfiguration.AUTO_FAILOVER_EMBEDDED
    This property should never be set to false.
    org.apache.hadoop.yarn.conf.YarnConfiguration.CURATOR_LEADER_ELECTOR
    Eventually we want to default to the Curator-based implementation and remove the ActiveStandbyElector-based implementation; this config should be removed then.
    org.apache.hadoop.yarn.conf.YarnConfiguration.DEFAULT_AUTO_FAILOVER_EMBEDDED
    The YarnConfiguration.AUTO_FAILOVER_EMBEDDED property is deprecated.
    org.apache.hadoop.yarn.conf.YarnConfiguration.DEFAULT_NM_CONTAINER_MON_INTERVAL_MS
    org.apache.hadoop.yarn.conf.YarnConfiguration.DEFAULT_NM_DOCKER_STOP_GRACE_PERIOD
    org.apache.hadoop.yarn.conf.YarnConfiguration.DEFAULT_RM_STATE_STORE_MAX_COMPLETED_APPLICATIONS
    This default value is ignored and will be removed in a future release. The default value of yarn.resourcemanager.state-store.max-completed-applications is the value of YarnConfiguration.RM_MAX_COMPLETED_APPLICATIONS.
    org.apache.hadoop.yarn.conf.YarnConfiguration.DISPLAY_APPS_FOR_LOGGED_IN_USER
    org.apache.hadoop.yarn.conf.YarnConfiguration.HADOOP_HTTP_WEBAPP_SCHEDULER_PAGE
    Deprecated in favor of YarnConfiguration.YARN_HTTP_WEBAPP_SCHEDULER_PAGE.
    org.apache.hadoop.yarn.conf.YarnConfiguration.NM_DOCKER_STOP_GRACE_PERIOD
    use YarnConfiguration.NM_SLEEP_DELAY_BEFORE_SIGKILL_MS
    org.apache.hadoop.yarn.conf.YarnConfiguration.YARN_CLIENT_APP_SUBMISSION_POLL_INTERVAL_MS
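Several of the fields above are configuration keys that moved rather than disappeared; for example, the old io.sort.* keys now live under mapreduce.task.io.sort.* (HADOOP-6801). A sketch of the replacement settings in mapred-site.xml (the values shown are illustrative, not authoritative defaults):

```xml
<configuration>
  <!-- replaces the deprecated io.sort.factor key -->
  <property>
    <name>mapreduce.task.io.sort.factor</name>
    <value>10</value>
  </property>
  <!-- replaces the deprecated io.sort.mb key -->
  <property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>100</value>
  </property>
</configuration>
```

Setting the new keys directly avoids relying on Hadoop's deprecated-key translation layer.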
  • Deprecated Methods
    Method
    Description
    csi.v0.Csi.ControllerServiceCapability.RPC.Type.valueOf(int)
    csi.v0.Csi.ControllerServiceCapability.TypeCase.valueOf(int)
    csi.v0.Csi.NodeServiceCapability.RPC.Type.valueOf(int)
    csi.v0.Csi.NodeServiceCapability.TypeCase.valueOf(int)
    csi.v0.Csi.PluginCapability.Service.Type.valueOf(int)
    csi.v0.Csi.PluginCapability.TypeCase.valueOf(int)
    csi.v0.Csi.SnapshotStatus.Type.valueOf(int)
    csi.v0.Csi.VolumeCapability.AccessMode.Mode.valueOf(int)
    csi.v0.Csi.VolumeCapability.AccessTypeCase.valueOf(int)
    csi.v0.Csi.VolumeContentSource.TypeCase.valueOf(int)
    org.apache.hadoop.conf.Configuration.addDeprecation(String, String[])
    use Configuration.addDeprecation(String key, String newKey) instead
    org.apache.hadoop.conf.Configuration.addDeprecation(String, String[], String)
    use Configuration.addDeprecation(String key, String newKey, String customMessage) instead
    org.apache.hadoop.filecache.DistributedCache.addLocalArchives(Configuration, String)
    org.apache.hadoop.filecache.DistributedCache.addLocalFiles(Configuration, String)
    org.apache.hadoop.filecache.DistributedCache.createAllSymlink(Configuration, File, File)
    Internal to MapReduce framework. Use DistributedCacheManager instead.
    org.apache.hadoop.filecache.DistributedCache.getFileStatus(Configuration, URI)
    org.apache.hadoop.filecache.DistributedCache.getTimestamp(Configuration, URI)
    org.apache.hadoop.filecache.DistributedCache.setArchiveTimestamps(Configuration, String)
    org.apache.hadoop.filecache.DistributedCache.setFileTimestamps(Configuration, String)
    org.apache.hadoop.filecache.DistributedCache.setLocalArchives(Configuration, String)
    org.apache.hadoop.filecache.DistributedCache.setLocalFiles(Configuration, String)
    org.apache.hadoop.fs.AbstractFileSystem.getServerDefaults()
    use AbstractFileSystem.getServerDefaults(Path) instead
    org.apache.hadoop.fs.adl.AdlFileSystem.createNonRecursive(Path, FsPermission, EnumSet<CreateFlag>, int, short, long, Progressable)
    API only for 0.20-append
    org.apache.hadoop.fs.adl.AdlFileSystem.getBlockSize(Path)
    Use getFileStatus() instead
    org.apache.hadoop.fs.adl.AdlFileSystem.getDefaultBlockSize()
    use AdlFileSystem.getDefaultBlockSize(Path) instead
    org.apache.hadoop.fs.adl.AdlFileSystem.getReplication(Path)
    Use getFileStatus() instead
    org.apache.hadoop.fs.adl.AdlFileSystem.rename(Path, Path, Options.Rename...)
    org.apache.hadoop.fs.FileStatus.isDir()
    Use FileStatus.isFile(), FileStatus.isDirectory(), and FileStatus.isSymlink() instead.
    org.apache.hadoop.fs.FileStatus.readFields(DataInput)
    Use the PBHelper and protobuf serialization directly.
    org.apache.hadoop.fs.FileStatus.write(DataOutput)
    Use the PBHelper and protobuf serialization directly.
    org.apache.hadoop.fs.FileSystem.delete(Path)
    Use FileSystem.delete(Path, boolean) instead.
    org.apache.hadoop.fs.FileSystem.getAllStatistics()
    use FileSystem.getGlobalStorageStatistics()
    org.apache.hadoop.fs.FileSystem.getBlockSize(Path)
    Use FileSystem.getFileStatus(Path) instead
    org.apache.hadoop.fs.FileSystem.getDefaultBlockSize()
    use FileSystem.getDefaultBlockSize(Path) instead
    org.apache.hadoop.fs.FileSystem.getDefaultReplication()
    use FileSystem.getDefaultReplication(Path) instead
    org.apache.hadoop.fs.FileSystem.getLength(Path)
    Use FileSystem.getFileStatus(Path) instead.
    org.apache.hadoop.fs.FileSystem.getName()
    call FileSystem.getUri() instead.
    org.apache.hadoop.fs.FileSystem.getNamed(String, Configuration)
    call FileSystem.get(URI, Configuration) instead.
    org.apache.hadoop.fs.FileSystem.getReplication(Path)
    Use FileSystem.getFileStatus(Path) instead
    org.apache.hadoop.fs.FileSystem.getServerDefaults()
    use FileSystem.getServerDefaults(Path) instead
    org.apache.hadoop.fs.FileSystem.getStatistics()
    use FileSystem.getGlobalStorageStatistics()
    org.apache.hadoop.fs.FileSystem.getStatistics(String, Class<? extends FileSystem>)
    use FileSystem.getGlobalStorageStatistics()
    org.apache.hadoop.fs.FileSystem.isDirectory(Path)
    Use FileSystem.getFileStatus(Path) instead
    org.apache.hadoop.fs.FileSystem.isFile(Path)
    Use FileSystem.getFileStatus(Path) instead
    org.apache.hadoop.fs.FileSystem.primitiveCreate(Path, FsPermission, EnumSet<CreateFlag>, int, short, long, Progressable, Options.ChecksumOpt)
    org.apache.hadoop.fs.FileSystem.primitiveMkdir(Path, FsPermission)
    org.apache.hadoop.fs.FileSystem.primitiveMkdir(Path, FsPermission, boolean)
    org.apache.hadoop.fs.FileSystem.rename(Path, Path, Options.Rename...)
    org.apache.hadoop.fs.FileUtil.fullyDelete(FileSystem, Path)
    Use FileSystem.delete(Path, boolean)
    org.apache.hadoop.fs.FSBuilder.must(String, double)
    org.apache.hadoop.fs.FSBuilder.must(String, float)
    use FSBuilder.mustDouble(String, double) to set floating point.
    org.apache.hadoop.fs.FSBuilder.must(String, long)
    org.apache.hadoop.fs.FSBuilder.opt(String, double)
    use FSBuilder.optDouble(String, double)
    org.apache.hadoop.fs.FSBuilder.opt(String, float)
    use FSBuilder.optDouble(String, double)
    org.apache.hadoop.fs.FSBuilder.opt(String, long)
    use FSBuilder.optLong(String, long) where possible.
    org.apache.hadoop.fs.FSProtos.FileStatusProto.FileType.valueOf(int)
    org.apache.hadoop.fs.FSProtos.FileStatusProto.Flags.valueOf(int)
    org.apache.hadoop.fs.Path.makeQualified(FileSystem)
    use Path.makeQualified(URI, Path)
    org.apache.hadoop.fs.permission.FsPermission.getAclBit()
    Get acl bit from the FileStatus object.
    org.apache.hadoop.fs.permission.FsPermission.getEncryptedBit()
    Get encryption bit from the FileStatus object.
    org.apache.hadoop.fs.permission.FsPermission.getErasureCodedBit()
    Get ec bit from the FileStatus object.
    org.apache.hadoop.fs.permission.FsPermission.readFields(DataInput)
    org.apache.hadoop.fs.permission.FsPermission.toExtendedShort()
    org.apache.hadoop.fs.permission.FsPermission.write(DataOutput)
    org.apache.hadoop.fs.TrashPolicy.getInstance(Configuration, FileSystem, Path)
    Use TrashPolicy.getInstance(Configuration, FileSystem) instead.
    org.apache.hadoop.fs.TrashPolicy.initialize(Configuration, FileSystem, Path)
    Use TrashPolicy.initialize(Configuration, FileSystem) instead.
    org.apache.hadoop.fs.viewfs.ViewFs.getServerDefaults()
    org.apache.hadoop.ha.proto.HAServiceProtocolProtos.HARequestSource.valueOf(int)
    org.apache.hadoop.ha.proto.HAServiceProtocolProtos.HAServiceStateProto.valueOf(int)
    org.apache.hadoop.hdfs.client.HdfsAdmin.createEncryptionZone(Path, String)
    org.apache.hadoop.hdfs.client.HdfsAdmin.listOpenFiles()
    org.apache.hadoop.hdfs.client.HdfsAdmin.listOpenFiles(EnumSet<OpenFilesIterator.OpenFilesType>)
    org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.MountTableRecordProto.DestOrder.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.AclProtos.AclEntryProto.AclEntryScopeProto.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.AclProtos.AclEntryProto.AclEntryTypeProto.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.AclProtos.AclEntryProto.FsActionProto.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.AddBlockFlagProto.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.CacheFlagProto.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.CreateFlagProto.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.DatanodeReportTypeProto.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.OpenFilesTypeProto.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.RollingUpgradeActionProto.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.SafeModeActionProto.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.BlockCommandProto.Action.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.BlockIdCommandProto.Action.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.DatanodeCommandProto.Type.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.ErrorReportRequestProto.ErrorCode.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.ReceivedDeletedBlockInfoProto.BlockStatus.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.DataTransferEncryptorMessageProto.DataTransferEncryptorStatus.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto.BlockConstructionStage.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ShortCircuitFdResponse.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.Status.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.EncryptionZonesProtos.ReencryptActionProto.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.EncryptionZonesProtos.ReencryptionStateProto.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.AccessModeProto.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockChecksumTypeProto.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockTypeProto.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.ChecksumTypeProto.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.CipherSuiteProto.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.CryptoProtocolVersionProto.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeInfoProto.AdminState.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeStorageProto.StorageState.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.ErasureCodingPolicyState.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.HdfsFileStatusProto.FileType.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.HdfsFileStatusProto.Flags.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.StorageTypeProto.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.HdfsServerProtos.NamenodeCommandProto.Type.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.HdfsServerProtos.NamenodeRegistrationProto.NamenodeRoleProto.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.HdfsServerProtos.NNHAStatusHeartbeatProto.State.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.HdfsServerProtos.ReplicaStateProto.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.InotifyProtos.EventType.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.InotifyProtos.INodeType.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.InotifyProtos.MetadataUpdateType.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.XAttrProtos.XAttrProto.XAttrNamespaceProto.valueOf(int)
    org.apache.hadoop.hdfs.protocol.proto.XAttrProtos.XAttrSetFlagProto.valueOf(int)
    org.apache.hadoop.hdfs.server.namenode.FsImageProto.INodeSection.INode.Type.valueOf(int)
    org.apache.hadoop.hdfs.server.namenode.FsImageProto.SnapshotDiffSection.DiffEntry.Type.valueOf(int)
    org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider.getAttributes(String, INodeAttributes)
    org.apache.hadoop.io.BytesWritable.get()
    Use BytesWritable.getBytes() instead.
    org.apache.hadoop.io.BytesWritable.getSize()
    Use BytesWritable.getLength() instead.
    org.apache.hadoop.io.SequenceFile.createWriter(Configuration, FSDataOutputStream, Class, Class, SequenceFile.CompressionType, CompressionCodec)
    Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
    org.apache.hadoop.io.SequenceFile.createWriter(Configuration, FSDataOutputStream, Class, Class, SequenceFile.CompressionType, CompressionCodec, SequenceFile.Metadata)
    Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
    org.apache.hadoop.io.SequenceFile.createWriter(FileSystem, Configuration, Path, Class, Class)
    Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
    org.apache.hadoop.io.SequenceFile.createWriter(FileSystem, Configuration, Path, Class, Class, int, short, long, boolean, SequenceFile.CompressionType, CompressionCodec, SequenceFile.Metadata)
    org.apache.hadoop.io.SequenceFile.createWriter(FileSystem, Configuration, Path, Class, Class, int, short, long, SequenceFile.CompressionType, CompressionCodec, Progressable, SequenceFile.Metadata)
    Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
    org.apache.hadoop.io.SequenceFile.createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType)
    Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
    org.apache.hadoop.io.SequenceFile.createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, CompressionCodec)
    Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
    org.apache.hadoop.io.SequenceFile.createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, CompressionCodec, Progressable)
    Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
    org.apache.hadoop.io.SequenceFile.createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, CompressionCodec, Progressable, SequenceFile.Metadata)
    Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
    org.apache.hadoop.io.SequenceFile.createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, Progressable)
    Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
    org.apache.hadoop.io.WritableUtils.cloneInto(Writable, Writable)
    use ReflectionUtils.cloneInto instead.
    org.apache.hadoop.ipc.Client.getTimeout(Configuration)
    use Client.getRpcTimeout(Configuration) instead
    org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcKindProto.valueOf(int)
    org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcRequestHeaderProto.OperationProto.valueOf(int)
    org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto.RpcErrorCodeProto.valueOf(int)
    org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto.RpcStatusProto.valueOf(int)
    org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcSaslProto.SaslState.valueOf(int)
    org.apache.hadoop.ipc.Server.call(Writable, long)
    Use Server.call(RPC.RpcKind, String, Writable, long) instead
    org.apache.hadoop.mapred.ClusterStatus.getGraylistedTrackerNames()
    org.apache.hadoop.mapred.ClusterStatus.getGraylistedTrackers()
    org.apache.hadoop.mapred.ClusterStatus.getJobTrackerState()
    org.apache.hadoop.mapred.ClusterStatus.getMaxMemory()
    org.apache.hadoop.mapred.ClusterStatus.getUsedMemory()
    org.apache.hadoop.mapred.Counters.Counter.contentEquals(Counters.Counter)
     
    org.apache.hadoop.mapred.Counters.findCounter(String, int, String)
    use Counters.findCounter(String, String) instead
    org.apache.hadoop.mapred.Counters.Group.getCounter(int, String)
    use Counters.Group.findCounter(String) instead
    org.apache.hadoop.mapred.Counters.size()
    use AbstractCounters.countCounters() instead
    org.apache.hadoop.mapred.FileOutputCommitter.cleanupJob(JobContext)
    org.apache.hadoop.mapred.FileOutputCommitter.isRecoverySupported()
    org.apache.hadoop.mapred.JobClient.cancelDelegationToken(Token<DelegationTokenIdentifier>)
    Use Token.cancel(org.apache.hadoop.conf.Configuration) instead
    org.apache.hadoop.mapred.JobClient.getJob(String)
    Applications should rather use JobClient.getJob(JobID).
    org.apache.hadoop.mapred.JobClient.getMapTaskReports(String)
    Applications should rather use JobClient.getMapTaskReports(JobID)
    org.apache.hadoop.mapred.JobClient.getReduceTaskReports(String)
    Applications should rather use JobClient.getReduceTaskReports(JobID)
    org.apache.hadoop.mapred.JobClient.getTaskOutputFilter()
    org.apache.hadoop.mapred.JobClient.renewDelegationToken(Token<DelegationTokenIdentifier>)
    Use Token.renew(org.apache.hadoop.conf.Configuration) instead
    org.apache.hadoop.mapred.JobClient.setTaskOutputFilter(JobClient.TaskStatusFilter)
    org.apache.hadoop.mapred.JobConf.deleteLocalFiles()
    org.apache.hadoop.mapred.JobConf.getMaxPhysicalMemoryForTask()
    This method is deprecated and no longer in use.
    org.apache.hadoop.mapred.JobConf.getMaxVirtualMemoryForTask()
    Use JobConf.getMemoryForMapTask() and JobConf.getMemoryForReduceTask()
    org.apache.hadoop.mapred.JobConf.getSessionId()
    org.apache.hadoop.mapred.JobConf.setMaxPhysicalMemoryForTask(long)
    org.apache.hadoop.mapred.JobConf.setMaxVirtualMemoryForTask(long)
    Use JobConf.setMemoryForMapTask(long mem) and JobConf.setMemoryForReduceTask(long mem).
    org.apache.hadoop.mapred.JobConf.setSessionId(String)
    org.apache.hadoop.mapred.jobcontrol.Job.setAssignedJobID(JobID)
    setAssignedJobID should not be called; the JobID is set by the framework.
    org.apache.hadoop.mapred.jobcontrol.Job.setMapredJobID(String)
    org.apache.hadoop.mapred.jobcontrol.Job.setState(int)
    org.apache.hadoop.mapred.JobID.getJobIDsPattern(String, Integer)
    org.apache.hadoop.mapred.JobID.read(DataInput)
    org.apache.hadoop.mapred.JobQueueInfo.getQueueState()
    org.apache.hadoop.mapred.JobStatus.getJobId()
    use getJobID instead
    org.apache.hadoop.mapred.lib.CombineFileInputFormat.createPool(JobConf, List<PathFilter>)
    Use CombineFileInputFormat.createPool(List).
    org.apache.hadoop.mapred.lib.CombineFileInputFormat.createPool(JobConf, PathFilter...)
    Use CombineFileInputFormat.createPool(PathFilter...).
    org.apache.hadoop.mapred.lib.TotalOrderPartitioner.getPartitionFile(JobConf)
    Use TotalOrderPartitioner.getPartitionFile(Configuration) instead
    org.apache.hadoop.mapred.lib.TotalOrderPartitioner.setPartitionFile(JobConf, Path)
    Use TotalOrderPartitioner.setPartitionFile(Configuration, Path) instead
    org.apache.hadoop.mapred.OutputCommitter.cleanupJob(JobContext)
    Use OutputCommitter.commitJob(JobContext) or OutputCommitter.abortJob(JobContext, int) instead.
    org.apache.hadoop.mapred.OutputCommitter.cleanupJob(JobContext)
    Use OutputCommitter.commitJob(org.apache.hadoop.mapreduce.JobContext) or OutputCommitter.abortJob(org.apache.hadoop.mapreduce.JobContext, org.apache.hadoop.mapreduce.JobStatus.State) instead.
    org.apache.hadoop.mapred.OutputCommitter.isRecoverySupported()
    Use OutputCommitter.isRecoverySupported(JobContext) instead.
    org.apache.hadoop.mapred.pipes.Submitter.submitJob(JobConf)
    Use Submitter.runJob(JobConf)
    org.apache.hadoop.mapred.RunningJob.getJobID()
    This method is deprecated and will be removed. Use RunningJob.getID() instead.
    org.apache.hadoop.mapred.RunningJob.killTask(String, boolean)
    Use RunningJob.killTask(TaskAttemptID, boolean) instead.
    org.apache.hadoop.mapred.TaskAttemptID.getTaskAttemptIDsPattern(String, Integer, Boolean, Integer, Integer)
    org.apache.hadoop.mapred.TaskAttemptID.getTaskAttemptIDsPattern(String, Integer, TaskType, Integer, Integer)
    org.apache.hadoop.mapred.TaskAttemptID.read(DataInput)
    org.apache.hadoop.mapred.TaskCompletionEvent.getTaskId()
    use TaskCompletionEvent.getTaskAttemptId() instead.
    org.apache.hadoop.mapred.TaskCompletionEvent.setTaskId(String)
    use TaskCompletionEvent.setTaskAttemptId(TaskAttemptID) instead.
    org.apache.hadoop.mapred.TaskCompletionEvent.setTaskID(TaskAttemptID)
    use TaskCompletionEvent.setTaskAttemptId(TaskAttemptID) instead.
    org.apache.hadoop.mapred.TaskID.getTaskIDsPattern(String, Integer, Boolean, Integer)
    Use TaskID.getTaskIDsPattern(String, Integer, TaskType, Integer)
    org.apache.hadoop.mapred.TaskID.getTaskIDsPattern(String, Integer, TaskType, Integer)
    org.apache.hadoop.mapred.TaskID.read(DataInput)
    org.apache.hadoop.mapreduce.Cluster.cancelDelegationToken(Token<DelegationTokenIdentifier>)
    Use Token.cancel(org.apache.hadoop.conf.Configuration) instead
    org.apache.hadoop.mapreduce.Cluster.getAllJobs()
    Use Cluster.getAllJobStatuses() instead.
    org.apache.hadoop.mapreduce.Cluster.renewDelegationToken(Token<DelegationTokenIdentifier>)
    Use Token.renew(org.apache.hadoop.conf.Configuration) instead
    org.apache.hadoop.mapreduce.Counter.setDisplayName(String)
    (and no-op by default)
    org.apache.hadoop.mapreduce.Job.createSymlink()
    org.apache.hadoop.mapreduce.Job.getInstance(Cluster)
    Use Job.getInstance()
    org.apache.hadoop.mapreduce.Job.getInstance(Cluster, Configuration)
    Use Job.getInstance(Configuration)
    org.apache.hadoop.mapreduce.JobContext.getLocalCacheArchives()
    the array returned only includes the items that were downloaded. There is no way to map this to what is returned by JobContext.getCacheArchives().
    org.apache.hadoop.mapreduce.JobContext.getLocalCacheFiles()
    the array returned only includes the items that were downloaded. There is no way to map this to what is returned by JobContext.getCacheFiles().
    org.apache.hadoop.mapreduce.JobContext.getSymlink()
    org.apache.hadoop.mapreduce.lib.db.DBRecordReader.createValue()
    org.apache.hadoop.mapreduce.lib.db.DBRecordReader.getPos()
    org.apache.hadoop.mapreduce.lib.db.DBRecordReader.next(LongWritable, T)
    Use DBRecordReader.nextKeyValue()
    org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.cleanupJob(JobContext)
    org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.isRecoverySupported()
    org.apache.hadoop.mapreduce.OutputCommitter.cleanupJob(JobContext)
    Use OutputCommitter.commitJob(JobContext) and OutputCommitter.abortJob(JobContext, JobStatus.State) instead.
    org.apache.hadoop.mapreduce.OutputCommitter.isRecoverySupported()
    Use OutputCommitter.isRecoverySupported(JobContext) instead.
    org.apache.hadoop.mapreduce.security.TokenCache.getDelegationToken(Credentials, String)
    Use Credentials.getToken(org.apache.hadoop.io.Text) instead; this method is retained for compatibility with Hadoop-1.
    org.apache.hadoop.mapreduce.security.TokenCache.loadTokens(String, Configuration)
    Use Credentials.readTokenStorageFile(org.apache.hadoop.fs.Path, org.apache.hadoop.conf.Configuration) instead; this method is retained for compatibility with Hadoop-1.
    org.apache.hadoop.mapreduce.security.TokenCache.loadTokens(String, JobConf)
    Use Credentials.readTokenStorageFile(org.apache.hadoop.fs.Path, org.apache.hadoop.conf.Configuration) instead; this method is retained for compatibility with Hadoop-1.
    org.apache.hadoop.mapreduce.TaskAttemptID.isMap()
    org.apache.hadoop.mapreduce.TaskID.isMap()
    org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.setUseQueryStringForDelegationToken(boolean)
    org.apache.hadoop.security.UserGroupInformation.getGroups()
    Use UserGroupInformation.getGroupsSet() instead.
    org.apache.hadoop.service.ServiceOperations.stopQuietly(Log, Service)
    to be removed with 3.4.0. Use ServiceOperations.stopQuietly(Logger, Service) instead.
    org.apache.hadoop.util.ReflectionUtils.cloneWritableInto(Writable, Writable)
    org.apache.hadoop.util.ReflectionUtils.logThreadInfo(Log, String, long)
    to be removed with 3.4.0. Use ReflectionUtils.logThreadInfo(Logger, String, long) instead.
    org.apache.hadoop.util.Shell.isJava7OrAbove()
    This call is no longer needed; remove any uses of it.
    org.apache.hadoop.yarn.api.ContainerManagementProtocol.increaseContainersResource(IncreaseContainersResourceRequest)
    org.apache.hadoop.yarn.api.protocolrecords.GetClusterNodeLabelsResponse.getNodeLabels()
    Use GetClusterNodeLabelsResponse.getNodeLabelList() instead.
    org.apache.hadoop.yarn.api.protocolrecords.GetClusterNodeLabelsResponse.newInstance(Set<String>)
    Use GetClusterNodeLabelsResponse.newInstance(List) instead.
    org.apache.hadoop.yarn.api.protocolrecords.GetClusterNodeLabelsResponse.setNodeLabels(Set<String>)
    Use GetClusterNodeLabelsResponse.setNodeLabelList(List) instead.
    org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext.getAMContainerResourceRequest()
    See ApplicationSubmissionContext.getAMContainerResourceRequests()
    org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext.setAMContainerResourceRequest(ResourceRequest)
    See ApplicationSubmissionContext.setAMContainerResourceRequests(List)
    org.apache.hadoop.yarn.api.records.ContainerId.getId()
    org.apache.hadoop.yarn.api.records.ContainerId.newInstance(ApplicationAttemptId, int)
    org.apache.hadoop.yarn.api.records.Resource.getMemory()
    org.apache.hadoop.yarn.api.records.Resource.setMemory(int)
    org.apache.hadoop.yarn.client.api.AMRMClient.requestContainerResourceChange(Container, Resource)
    use AMRMClient.requestContainerUpdate(Container, UpdateContainerRequest)
    org.apache.hadoop.yarn.client.api.async.AMRMClientAsync.createAMRMClientAsync(int, AMRMClientAsync.CallbackHandler)
    Use AMRMClientAsync.createAMRMClientAsync(int, AMRMClientAsync.AbstractCallbackHandler) instead.
    org.apache.hadoop.yarn.client.api.async.AMRMClientAsync.createAMRMClientAsync(AMRMClient<T>, int, AMRMClientAsync.CallbackHandler)
    Use AMRMClientAsync.createAMRMClientAsync(AMRMClient, int, AMRMClientAsync.AbstractCallbackHandler) instead.
    org.apache.hadoop.yarn.client.api.async.AMRMClientAsync.requestContainerResourceChange(Container, Resource)
    use AMRMClientAsync.requestContainerUpdate(Container, UpdateContainerRequest)
    org.apache.hadoop.yarn.client.api.async.NMClientAsync.createNMClientAsync(NMClientAsync.CallbackHandler)
    Use NMClientAsync.createNMClientAsync(AbstractCallbackHandler) instead.
    org.apache.hadoop.yarn.client.api.async.NMClientAsync.increaseContainerResourceAsync(Container)
    org.apache.hadoop.yarn.client.api.NMClient.increaseContainerResource(Container)
    org.apache.hadoop.yarn.federation.proto.YarnServerFederationProtos.SubClusterStateProto.valueOf(int)
    org.apache.hadoop.yarn.util.ConverterUtils.getPathFromYarnURL(URL)
    org.apache.hadoop.yarn.util.ConverterUtils.getYarnUrlFromPath(Path)
    org.apache.hadoop.yarn.util.ConverterUtils.getYarnUrlFromURI(URI)
    org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(String)
    org.apache.hadoop.yarn.util.ConverterUtils.toApplicationId(String)
    org.apache.hadoop.yarn.util.ConverterUtils.toApplicationId(RecordFactory, String)
    org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(String)
    org.apache.hadoop.yarn.util.ConverterUtils.toNodeId(String)
    org.apache.hadoop.yarn.util.ConverterUtils.toString(ApplicationId)
    org.apache.hadoop.yarn.util.ConverterUtils.toString(ContainerId)
  • Deprecated Constructors
    Constructor
    Description
    org.apache.hadoop.fs.ContentSummary()
    org.apache.hadoop.fs.ContentSummary(long, long, long)
    org.apache.hadoop.fs.ContentSummary(long, long, long, long, long, long)
    org.apache.hadoop.fs.LocatedFileStatus(long, boolean, int, long, long, long, FsPermission, String, String, Path, Path, BlockLocation[])
    org.apache.hadoop.mapred.FileSplit(Path, long, long, JobConf)
    org.apache.hadoop.mapred.JobStatus(JobID, float, float, float, float, int, JobPriority)
    org.apache.hadoop.mapred.JobStatus(JobID, float, float, float, int)
    org.apache.hadoop.mapred.JobStatus(JobID, float, float, float, int, JobPriority)
    org.apache.hadoop.mapred.JobStatus(JobID, float, float, int)
    org.apache.hadoop.mapred.TaskAttemptID(String, int, boolean, int, int)
    Use TaskAttemptID(String, int, TaskType, int, int).
    org.apache.hadoop.mapred.TaskID(String, int, boolean, int)
    Use TaskID(org.apache.hadoop.mapreduce.JobID, TaskType, int)
    org.apache.hadoop.mapred.TaskID(JobID, boolean, int)
    Use TaskID(String, int, TaskType, int)
    org.apache.hadoop.mapreduce.Job()
    Use Job.getInstance()
    org.apache.hadoop.mapreduce.Job(Configuration)
    Use Job.getInstance(Configuration)
    org.apache.hadoop.mapreduce.Job(Configuration, String)
    Use Job.getInstance(Configuration, String)
    org.apache.hadoop.mapreduce.TaskAttemptID(String, int, boolean, int, int)
    org.apache.hadoop.mapreduce.TaskID(String, int, boolean, int)
    org.apache.hadoop.mapreduce.TaskID(JobID, boolean, int)
    org.apache.hadoop.yarn.client.api.async.AMRMClientAsync(int, AMRMClientAsync.CallbackHandler)
    org.apache.hadoop.yarn.client.api.async.AMRMClientAsync(AMRMClient<T>, int, AMRMClientAsync.CallbackHandler)
    org.apache.hadoop.yarn.client.api.async.NMClientAsync(String, NMClientAsync.CallbackHandler)
    Use NMClientAsync(String, AbstractCallbackHandler) instead.
    org.apache.hadoop.yarn.client.api.async.NMClientAsync(String, NMClient, NMClientAsync.CallbackHandler)
    org.apache.hadoop.yarn.client.api.async.NMClientAsync(NMClientAsync.CallbackHandler)
    Use NMClientAsync(AbstractCallbackHandler) instead.
    org.apache.hadoop.yarn.security.ContainerTokenIdentifier(ContainerId, String, String, Resource, long, int, long, Priority, long, LogAggregationContext)
    Use one of the other constructors instead.
    org.apache.hadoop.yarn.util.SystemClock()
  • Deprecated Enum Constants
    Enum Constant
    Description
    org.apache.hadoop.mapreduce.JobCounter.FALLOW_SLOTS_MILLIS_MAPS
    org.apache.hadoop.mapreduce.JobCounter.FALLOW_SLOTS_MILLIS_REDUCES
    org.apache.hadoop.mapreduce.JobCounter.SLOTS_MILLIS_MAPS
    org.apache.hadoop.mapreduce.JobCounter.SLOTS_MILLIS_REDUCES
    org.apache.hadoop.security.SaslRpcServer.AuthMethod.DIGEST
    org.apache.hadoop.yarn.api.records.AMCommand.AM_RESYNC
    Sent by the ResourceManager when it is out of sync with the AM and wants the AM to get back in sync. Note: instead of this command being sent, an ApplicationMasterNotRegisteredException is now thrown when the ApplicationMaster is out of sync with the ResourceManager, and the ApplicationMaster is expected to re-register with the RM by calling ApplicationMasterProtocol.registerApplicationMaster(RegisterApplicationMasterRequest).
    org.apache.hadoop.yarn.api.records.AMCommand.AM_SHUTDOWN
    Sent by the ResourceManager when it wants the AM to shut down. Note: this command was previously sent by the ResourceManager to instruct the AM to shut down if the RM had restarted. Now an ApplicationAttemptNotFoundException is thrown when the RM has restarted, and the AM is expected to handle this exception by shutting itself down.

Copyright © 2026 Apache Software Foundation. All rights reserved.