Deprecated API
Contents
- Deprecated Enum Classes
- Deprecated Exceptions
- Deprecated Fields
- Deprecated Methods
- Deprecated Constructors
- Deprecated Enum Constants
Deprecated Enum Classes
- Provided for compatibility. Use JobCounter instead.
- Provided for compatibility. Use TaskCounter instead.
Deprecated Exceptions
Deprecated Fields
- Please use CommonConfigurationKeysPublic.HADOOP_TAGS_CUSTOM instead. See https://issues.apache.org/jira/browse/HADOOP-15474.
- Please use CommonConfigurationKeysPublic.HADOOP_TAGS_SYSTEM instead. See https://issues.apache.org/jira/browse/HADOOP-15474.
- Moved to mapreduce; see mapreduce.task.io.sort.factor in mapred-default.xml (https://issues.apache.org/jira/browse/HADOOP-6801). For SequenceFile.Sorter control, see CommonConfigurationKeysPublic.SEQ_IO_SORT_FACTOR_KEY instead.
- Moved to mapreduce; see mapreduce.task.io.sort.mb in mapred-default.xml (https://issues.apache.org/jira/browse/HADOOP-6801). For SequenceFile.Sorter control, see CommonConfigurationKeysPublic.SEQ_IO_SORT_MB_KEY instead.
- Use the Options.OpenFileOptions value in code which only needs to be compiled against newer Hadoop releases.
- Use the Options.OpenFileOptions value in code which only needs to be compiled against newer Hadoop releases.
- Use the Options.OpenFileOptions value in code which only needs to be compiled against newer Hadoop releases.
- No longer supported.
- No longer supported.
- Configuration key to set the maximum virtual memory available to the map tasks (in kilobytes). This has been deprecated and will no longer have any effect.
- Configuration key to set the maximum virtual memory available to the reduce tasks (in kilobytes). This has been deprecated and will no longer have any effect.
- Use JobConf.MAPREDUCE_JOB_MAP_MEMORY_MB_PROPERTY and JobConf.MAPREDUCE_JOB_REDUCE_MEMORY_MB_PROPERTY. Configuration key to set the maximum virtual memory available to the child map and reduce tasks (in kilobytes). This has been deprecated and will no longer have any effect.
- Use the correctly spelled constant.
- Use one of the exception-raising getter methods, specifically Shell.getWinUtilsPath() or Shell.getWinUtilsFile().
- This property should never be set to false. Eventually, we want to default to the curator-based implementation and remove the ActiveStandbyElector-based implementation; we should remove this config then.
- The YarnConfiguration.AUTO_FAILOVER_EMBEDDED property is deprecated.
- This default value is ignored and will be removed in a future release. The default value of yarn.resourcemanager.state-store.max-completed-applications is the value of YarnConfiguration.RM_MAX_COMPLETED_APPLICATIONS.
- This field is deprecated in favor of YarnConfiguration.YARN_HTTP_WEBAPP_SCHEDULER_PAGE.
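The io.sort.factor entry above splits one legacy key in two: MapReduce job sorting now reads mapreduce.task.io.sort.factor, while SequenceFile.Sorter reads CommonConfigurationKeysPublic.SEQ_IO_SORT_FACTOR_KEY. A minimal sketch of setting both, assuming hadoop-common on the classpath (the class name and the value 64 are illustrative, not from this page):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeysPublic;

// Illustrative helper, not part of Hadoop: shows where the old
// io.sort.factor setting has to go after the split.
public class SortFactorConfig {
    static void configure(Configuration conf) {
        // Before: a single io.sort.factor key covered both uses.
        // MapReduce sorting now reads the mapreduce.* key ...
        conf.setInt("mapreduce.task.io.sort.factor", 64);
        // ... while SequenceFile.Sorter reads the seq.io.* constant.
        conf.setInt(CommonConfigurationKeysPublic.SEQ_IO_SORT_FACTOR_KEY, 64);
    }
}
```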
Deprecated Methods
- Internal to the MapReduce framework. Use DistributedCacheManager instead.
- Use AbstractFileSystem.getServerDefaults(Path) instead.
- API only for 0.20-append.
- Use getFileStatus() instead.
- Use AdlFileSystem.getDefaultBlockSize(Path) instead.
- Use getFileStatus() instead.
- Use the PBHelper and protobuf serialization directly.
- Use the PBHelper and protobuf serialization directly.
- Use FileSystem.delete(Path, boolean) instead.
- Use FileSystem.getFileStatus(Path) instead.
- Use FileSystem.getDefaultBlockSize(Path) instead.
- Use FileSystem.getDefaultReplication(Path) instead.
- Use FileSystem.getFileStatus(Path) instead.
- Call FileSystem.getUri() instead.
- Call FileSystem.get(URI, Configuration) instead.
- Use FileSystem.getFileStatus(Path) instead.
- Use FileSystem.getServerDefaults(Path) instead.
- Use FileSystem.getFileStatus(Path) instead.
- Use FileSystem.getFileStatus(Path) instead.
- Use FSBuilder.mustDouble(String, double) to set floating point.
- Use FSBuilder.optLong(String, long) where possible.
- Use Path.makeQualified(URI, Path).
- Get the acl bit from the FileStatus object.
- Get the encryption bit from the FileStatus object.
- Get the ec bit from the FileStatus object.
- Use TrashPolicy.getInstance(Configuration, FileSystem) instead.
- Use TrashPolicy.initialize(Configuration, FileSystem) instead.
- org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.BlockIdCommandProto.Action.valueOf(int)
- org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider.getAttributes(String, INodeAttributes)
- Use BytesWritable.getBytes() instead.
- Use BytesWritable.getLength() instead.
- Use ReflectionUtils.cloneInto instead.
- Use Client.getRpcTimeout(Configuration) instead.
- org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto.RpcErrorCodeProto.valueOf(int)
- Use Server.call(RPC.RpcKind, String, Writable, long) instead.
- Use Counters.findCounter(String, String) instead.
- Use Counters.Group.findCounter(String) instead.
- Use AbstractCounters.countCounters() instead.
- Applications should rather use JobClient.getJob(JobID).
- Applications should rather use JobClient.getMapTaskReports(JobID).
- Applications should rather use JobClient.getReduceTaskReports(JobID).
- This variable is deprecated and no longer in use.
- setAssignedJobID should not be called; JOBID is set by the framework.
- Use getJobID instead.
- Use OutputCommitter.isRecoverySupported(JobContext) instead.
- This method is deprecated and will be removed. Applications should rather use RunningJob.getID().
- Applications should rather use RunningJob.killTask(TaskAttemptID, boolean).
- Use TaskCompletionEvent.getTaskAttemptId() instead.
- Use TaskCompletionEvent.setTaskAttemptId(TaskAttemptID) instead.
- Use TaskCompletionEvent.setTaskAttemptId(TaskAttemptID) instead.
- Use Cluster.getAllJobStatuses() instead.
- (and no-op by default)
- The array returned only includes the items that were downloaded. There is no way to map this to what is returned by JobContext.getCacheArchives().
- The array returned only includes the items that were downloaded. There is no way to map this to what is returned by JobContext.getCacheFiles().
- Use OutputCommitter.isRecoverySupported(JobContext) instead.
- org.apache.hadoop.mapreduce.security.TokenCache.getDelegationToken(Credentials, String): use Credentials.getToken(org.apache.hadoop.io.Text) instead; this method is included for compatibility against Hadoop 1.
- org.apache.hadoop.mapreduce.security.TokenCache.loadTokens(String, Configuration): use Credentials.readTokenStorageFile(org.apache.hadoop.fs.Path, org.apache.hadoop.conf.Configuration) instead; this method is included for compatibility against Hadoop 1.
- org.apache.hadoop.mapreduce.security.TokenCache.loadTokens(String, JobConf): use Credentials.readTokenStorageFile(org.apache.hadoop.fs.Path, org.apache.hadoop.conf.Configuration) instead; this method is included for compatibility against Hadoop 1.
- Use UserGroupInformation.getGroupsSet() instead.
- To be removed with 3.4.0. Use ServiceOperations.stopQuietly(Logger, Service) instead.
- To be removed with 3.4.0. Use ReflectionUtils.logThreadInfo(Logger, String, long) instead.
- This call isn't needed any more; please remove uses of it.
- Use GetClusterNodeLabelsResponse.getNodeLabelList() instead.
- Use GetClusterNodeLabelsResponse.newInstance(List) instead.
- Use GetClusterNodeLabelsResponse.setNodeLabelList(List) instead.
- org.apache.hadoop.yarn.api.records.ContainerId.newInstance(ApplicationAttemptId, int)
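Several of the FileSystem entries above converge on one migration: per-attribute accessors are replaced by a single FileSystem.getFileStatus(Path) call, with the attributes read off the returned FileStatus. A minimal sketch of that pattern, assuming hadoop-common on the classpath (the class and method names here are illustrative, not part of Hadoop):

```java
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative helpers: replace deprecated per-attribute FileSystem
// accessors with one getFileStatus(Path) call each.
public class FileStatusMigration {
    static long fileLength(FileSystem fs, Path p) throws IOException {
        // Instead of a deprecated length accessor, ask for the
        // FileStatus and read the length from it.
        FileStatus status = fs.getFileStatus(p);
        return status.getLen();
    }

    static boolean isDir(FileSystem fs, Path p) throws IOException {
        // Same pattern for the directory check.
        return fs.getFileStatus(p).isDirectory();
    }
}
```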
Deprecated Constructors
- org.apache.hadoop.yarn.client.api.async.AMRMClientAsync(AMRMClient<T>, int, AMRMClientAsync.CallbackHandler): use NMClientAsync(String, AbstractCallbackHandler) instead.
- org.apache.hadoop.yarn.client.api.async.NMClientAsync(String, NMClient, NMClientAsync.CallbackHandler): use NMClientAsync(AbstractCallbackHandler) instead.
- Use one of the other constructors instead.
Deprecated Enum Constants
- Sent by the Resource Manager when it is out of sync with the AM and wants the AM to get back in sync. Note: instead of sending this command, ApplicationMasterNotRegisteredException will be thrown when the ApplicationMaster is out of sync with the ResourceManager, and the ApplicationMaster is expected to re-register with the RM by calling ApplicationMasterProtocol.registerApplicationMaster(RegisterApplicationMasterRequest).
- Sent by the Resource Manager when it wants the AM to shut down. Note: this command was earlier sent by the ResourceManager to instruct the AM to shut down if the RM had restarted. Now ApplicationAttemptNotFoundException will be thrown in case the RM has restarted, and the AM is supposed to handle this exception by shutting itself down.
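The first entry above means an AM no longer receives a resync command; it instead sees ApplicationMasterNotRegisteredException on its next allocate call and is expected to re-register. A hedged sketch of that handling, assuming hadoop-yarn-api on the classpath (the class name, host/port/URL placeholders, and retry shape are illustrative, not part of Hadoop):

```java
import java.io.IOException;

import org.apache.hadoop.yarn.api.ApplicationMasterProtocol;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterRequest;
import org.apache.hadoop.yarn.exceptions.ApplicationMasterNotRegisteredException;
import org.apache.hadoop.yarn.exceptions.YarnException;

// Illustrative AM heartbeat body: rather than reacting to a resync
// command from the RM, the AM catches the exception, re-registers,
// and retries the heartbeat.
public class ResyncHandling {
    static AllocateResponse heartbeat(ApplicationMasterProtocol rm,
                                      AllocateRequest request)
            throws YarnException, IOException {
        try {
            return rm.allocate(request);
        } catch (ApplicationMasterNotRegisteredException e) {
            // The RM lost track of this AM (e.g. after an RM restart):
            // re-register, then retry. Host, port, and tracking URL
            // are placeholders.
            rm.registerApplicationMaster(
                RegisterApplicationMasterRequest.newInstance(
                    "am-host", 0, ""));
            return rm.allocate(request);
        }
    }
}
```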