Class JobConf

java.lang.Object
org.apache.hadoop.conf.Configuration
org.apache.hadoop.mapred.JobConf
All Implemented Interfaces:
Iterable<Map.Entry<String,String>>, Writable

@Public @Stable public class JobConf extends Configuration
A map/reduce job configuration.

JobConf is the primary interface for a user to describe a map-reduce job to the Hadoop framework for execution. The framework tries to faithfully execute the job as described by JobConf; however:

  1. Some configuration parameters might have been marked as final by administrators and hence cannot be altered.
  2. While some job parameters are straightforward to set (e.g. setNumReduceTasks(int)), others interact subtly with the rest of the framework and/or the job configuration and are harder for the user to control finely (e.g. setNumMapTasks(int)).

JobConf typically specifies the Mapper, combiner (if any), Partitioner, Reducer, InputFormat and OutputFormat implementations to be used etc.

Optionally, JobConf is used to specify other advanced facets of the job, such as the Comparators to be used, files to be put in the DistributedCache, whether intermediate and/or job outputs are to be compressed (and how), and debuggability via user-provided scripts (setMapDebugScript(String)/setReduceDebugScript(String)) for post-processing task logs, the task's stdout, stderr, and syslog.

Here is an example on how to configure a job via JobConf:

     // Create a new JobConf
     JobConf job = new JobConf(new Configuration(), MyJob.class);
     
     // Specify various job-specific parameters     
     job.setJobName("myjob");
     
     FileInputFormat.setInputPaths(job, new Path("in"));
     FileOutputFormat.setOutputPath(job, new Path("out"));
     
     job.setMapperClass(MyJob.MyMapper.class);
     job.setCombinerClass(MyJob.MyReducer.class);
     job.setReducerClass(MyJob.MyReducer.class);
     
     job.setInputFormat(SequenceFileInputFormat.class);
     job.setOutputFormat(SequenceFileOutputFormat.class);
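Once configured, the job can be submitted with JobClient. A minimal sketch continuing the example above (JobClient.runJob blocks until the job completes and throws IOException on failure):

```java
// Submit the configured job and wait for completion (old mapred API).
RunningJob running = JobClient.runJob(job);
System.out.println("Job " + running.getID()
        + " successful: " + running.isSuccessful());
```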
 
See Also:
  • Field Details

    • MAPRED_TASK_MAXVMEM_PROPERTY

      @Deprecated public static final String MAPRED_TASK_MAXVMEM_PROPERTY
      Deprecated.
      Use MAPREDUCE_JOB_MAP_MEMORY_MB_PROPERTY and MAPREDUCE_JOB_REDUCE_MEMORY_MB_PROPERTY
      See Also:
    • UPPER_LIMIT_ON_TASK_VMEM_PROPERTY

      @Deprecated public static final String UPPER_LIMIT_ON_TASK_VMEM_PROPERTY
      Deprecated.
      See Also:
    • MAPRED_TASK_DEFAULT_MAXVMEM_PROPERTY

      @Deprecated public static final String MAPRED_TASK_DEFAULT_MAXVMEM_PROPERTY
      Deprecated.
      See Also:
    • MAPRED_TASK_MAXPMEM_PROPERTY

      @Deprecated public static final String MAPRED_TASK_MAXPMEM_PROPERTY
      Deprecated.
      See Also:
    • DISABLED_MEMORY_LIMIT

      @Deprecated public static final long DISABLED_MEMORY_LIMIT
      Deprecated.
      A value which if set for memory related configuration options, indicates that the options are turned off. Deprecated because it makes no sense in the context of MR2.
      See Also:
    • MAPRED_LOCAL_DIR_PROPERTY

      public static final String MAPRED_LOCAL_DIR_PROPERTY
      Property name for the configuration property mapreduce.cluster.local.dir
      See Also:
    • DEFAULT_QUEUE_NAME

      public static final String DEFAULT_QUEUE_NAME
      Name of the queue to which jobs will be submitted, if no queue name is mentioned.
      See Also:
    • MAPRED_JOB_MAP_MEMORY_MB_PROPERTY

      @Deprecated public static final String MAPRED_JOB_MAP_MEMORY_MB_PROPERTY
      Deprecated.
      The variable is kept for M/R 1.x applications, while M/R 2.x applications should use MAPREDUCE_JOB_MAP_MEMORY_MB_PROPERTY
      See Also:
    • MAPRED_JOB_REDUCE_MEMORY_MB_PROPERTY

      @Deprecated public static final String MAPRED_JOB_REDUCE_MEMORY_MB_PROPERTY
      Deprecated.
      The variable is kept for M/R 1.x applications, while M/R 2.x applications should use MAPREDUCE_JOB_REDUCE_MEMORY_MB_PROPERTY
      See Also:
    • UNPACK_JAR_PATTERN_DEFAULT

      public static final Pattern UNPACK_JAR_PATTERN_DEFAULT
      Pattern for the default unpacking behavior for job jars
    • MAPRED_TASK_JAVA_OPTS

      @Deprecated public static final String MAPRED_TASK_JAVA_OPTS
      Configuration key to set the Java command-line options for the child map and reduce tasks. If present, the symbol @taskid@ is interpolated and replaced by the current TaskID; any other occurrence of '@' goes unchanged. For example, to enable verbose GC logging to a file named after the task id in /tmp and to cap the heap at one gigabyte, pass a value of -Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc. The configuration variable MAPRED_TASK_ENV can be used to pass other environment variables to the child processes.
      See Also:
    • MAPRED_MAP_TASK_JAVA_OPTS

      public static final String MAPRED_MAP_TASK_JAVA_OPTS
      Configuration key to set the Java command-line options for the map tasks. If present, the symbol @taskid@ is interpolated and replaced by the current TaskID; any other occurrence of '@' goes unchanged. For example, to enable verbose GC logging to a file named after the task id in /tmp and to cap the heap at one gigabyte, pass a value of -Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc. The configuration variable MAPRED_MAP_TASK_ENV can be used to pass other environment variables to the map processes.
      See Also:
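For instance, the map-task opts from the example above can be set programmatically. A sketch, assuming an existing JobConf named job:

```java
// Give each map task a 1 GB heap and a GC log named after its task id;
// @taskid@ is interpolated by the framework at launch time.
job.set(JobConf.MAPRED_MAP_TASK_JAVA_OPTS,
        "-Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc");
```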
    • MAPRED_REDUCE_TASK_JAVA_OPTS

      public static final String MAPRED_REDUCE_TASK_JAVA_OPTS
      Configuration key to set the Java command-line options for the reduce tasks. If present, the symbol @taskid@ is interpolated and replaced by the current TaskID; any other occurrence of '@' goes unchanged. For example, to enable verbose GC logging to a file named after the task id in /tmp and to cap the heap at one gigabyte, pass a value of -Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc. The configuration variable MAPRED_REDUCE_TASK_ENV can be used to pass other environment variables to the reduce processes.
      See Also:
    • DEFAULT_MAPRED_TASK_JAVA_OPTS

      public static final String DEFAULT_MAPRED_TASK_JAVA_OPTS
      See Also:
    • MAPRED_TASK_ULIMIT

      @Deprecated public static final String MAPRED_TASK_ULIMIT
      Deprecated.
      Configuration key to set the maximum virtual memory available to the child map and reduce tasks (in kilo-bytes). This has been deprecated and will no longer have any effect.
      See Also:
    • MAPRED_MAP_TASK_ULIMIT

      @Deprecated public static final String MAPRED_MAP_TASK_ULIMIT
      Deprecated.
      Configuration key to set the maximum virtual memory available to the map tasks (in kilo-bytes). This has been deprecated and will no longer have any effect.
      See Also:
    • MAPRED_REDUCE_TASK_ULIMIT

      @Deprecated public static final String MAPRED_REDUCE_TASK_ULIMIT
      Deprecated.
      Configuration key to set the maximum virtual memory available to the reduce tasks (in kilo-bytes). This has been deprecated and will no longer have any effect.
      See Also:
    • MAPRED_TASK_ENV

      @Deprecated public static final String MAPRED_TASK_ENV
      Configuration key to set the environment of the child map/reduce tasks. The format of the value is k1=v1,k2=v2. Further it can reference existing environment variables via $key on Linux or %key% on Windows. Example:
      • A=foo - This will set the env variable A to foo.
      See Also:
    • MAPRED_MAP_TASK_ENV

      public static final String MAPRED_MAP_TASK_ENV
      Configuration key to set the environment of the child map tasks. The format of the value is k1=v1,k2=v2. Further it can reference existing environment variables via $key on Linux or %key% on Windows. Example:
      • A=foo - This will set the env variable A to foo.
      You can also add environment variables individually by appending .VARNAME to this configuration key, where VARNAME is the name of the environment variable. Example:
      • mapreduce.map.env.VARNAME=value
      See Also:
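Both forms can be sketched in code (assuming an existing JobConf named job; the variable names and the library path are illustrative):

```java
// Comma-separated form: sets A and B for every child map task.
job.set(JobConf.MAPRED_MAP_TASK_ENV, "A=foo,B=bar");
// Per-variable form: useful when a value itself contains commas.
job.set("mapreduce.map.env.LD_LIBRARY_PATH", "/opt/native/lib");
```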
    • MAPRED_REDUCE_TASK_ENV

      public static final String MAPRED_REDUCE_TASK_ENV
      Configuration key to set the environment of the child reduce tasks. The format of the value is k1=v1,k2=v2. Further it can reference existing environment variables via $key on Linux or %key% on Windows. Example:
      • A=foo - This will set the env variable A to foo.
      You can also add environment variables individually by appending .VARNAME to this configuration key, where VARNAME is the name of the environment variable. Example:
      • mapreduce.reduce.env.VARNAME=value
      See Also:
    • MAPRED_MAP_TASK_LOG_LEVEL

      public static final String MAPRED_MAP_TASK_LOG_LEVEL
      Configuration key to set the logging level for the map task. The allowed logging levels are: OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE and ALL.
      See Also:
    • MAPRED_REDUCE_TASK_LOG_LEVEL

      public static final String MAPRED_REDUCE_TASK_LOG_LEVEL
      Configuration key to set the logging level for the reduce task. The allowed logging levels are: OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE and ALL.
      See Also:
    • DEFAULT_LOG_LEVEL

      public static final String DEFAULT_LOG_LEVEL
      Default logging level for map/reduce tasks.
      See Also:
    • WORKFLOW_ID

      @Deprecated public static final String WORKFLOW_ID
      Deprecated.
      The variable is kept for M/R 1.x applications, M/R 2.x applications should use MRJobConfig.WORKFLOW_ID instead
      See Also:
    • WORKFLOW_NAME

      @Deprecated public static final String WORKFLOW_NAME
      Deprecated.
      The variable is kept for M/R 1.x applications, M/R 2.x applications should use MRJobConfig.WORKFLOW_NAME instead
      See Also:
    • WORKFLOW_NODE_NAME

      @Deprecated public static final String WORKFLOW_NODE_NAME
      Deprecated.
      The variable is kept for M/R 1.x applications, M/R 2.x applications should use MRJobConfig.WORKFLOW_NODE_NAME instead
      See Also:
    • WORKFLOW_ADJACENCY_PREFIX_STRING

      @Deprecated public static final String WORKFLOW_ADJACENCY_PREFIX_STRING
      Deprecated.
      The variable is kept for M/R 1.x applications, M/R 2.x applications should use MRJobConfig.WORKFLOW_ADJACENCY_PREFIX_STRING instead
      See Also:
    • WORKFLOW_ADJACENCY_PREFIX_PATTERN

      @Deprecated public static final String WORKFLOW_ADJACENCY_PREFIX_PATTERN
      Deprecated.
      The variable is kept for M/R 1.x applications, M/R 2.x applications should use MRJobConfig.WORKFLOW_ADJACENCY_PREFIX_PATTERN instead
      See Also:
    • WORKFLOW_TAGS

      @Deprecated public static final String WORKFLOW_TAGS
      Deprecated.
      The variable is kept for M/R 1.x applications, M/R 2.x applications should use MRJobConfig.WORKFLOW_TAGS instead
      See Also:
    • MAPREDUCE_RECOVER_JOB

      @Deprecated public static final String MAPREDUCE_RECOVER_JOB
      Deprecated.
      The variable is kept for M/R 1.x applications, M/R 2.x applications should not use it
      See Also:
    • DEFAULT_MAPREDUCE_RECOVER_JOB

      @Deprecated public static final boolean DEFAULT_MAPREDUCE_RECOVER_JOB
      Deprecated.
      The variable is kept for M/R 1.x applications, M/R 2.x applications should not use it
      See Also:
  • Constructor Details

    • JobConf

      public JobConf()
      Construct a map/reduce job configuration.
    • JobConf

      public JobConf(Class exampleClass)
      Construct a map/reduce job configuration.
      Parameters:
      exampleClass - a class whose containing jar is used as the job's jar.
    • JobConf

      public JobConf(Configuration conf)
      Construct a map/reduce job configuration.
      Parameters:
      conf - a Configuration whose settings will be inherited.
    • JobConf

      public JobConf(Configuration conf, Class exampleClass)
      Construct a map/reduce job configuration.
      Parameters:
      conf - a Configuration whose settings will be inherited.
      exampleClass - a class whose containing jar is used as the job's jar.
    • JobConf

      public JobConf(String config)
      Construct a map/reduce configuration.
      Parameters:
      config - a Configuration-format XML job description file.
    • JobConf

      public JobConf(Path config)
      Construct a map/reduce configuration.
      Parameters:
      config - a Configuration-format XML job description file.
    • JobConf

      public JobConf(boolean loadDefaults)
      A new map/reduce configuration where the behavior of reading from the default resources can be turned off.

      If the parameter loadDefaults is false, the new instance will not load resources from the default files.

      Parameters:
      loadDefaults - specifies whether to load from the default files
  • Method Details

    • getCredentials

      public Credentials getCredentials()
      Get credentials for the job.
      Returns:
      credentials for the job
    • setCredentials

      @Private public void setCredentials(Credentials credentials)
    • getJar

      public String getJar()
      Get the user jar for the map-reduce job.
      Returns:
      the user jar for the map-reduce job.
    • setJar

      public void setJar(String jar)
      Set the user jar for the map-reduce job.
      Parameters:
      jar - the user jar for the map-reduce job.
    • getJarUnpackPattern

      public Pattern getJarUnpackPattern()
      Get the pattern for jar contents to unpack on the tasktracker
    • setJarByClass

      public void setJarByClass(Class cls)
      Set the job's jar file by finding an example class location.
      Parameters:
      cls - the example class.
    • getLocalDirs

      public String[] getLocalDirs() throws IOException
      Throws:
      IOException
    • deleteLocalFiles

      @Deprecated public void deleteLocalFiles() throws IOException
      Deprecated.
      Use MRAsyncDiskService.moveAndDeleteAllVolumes instead.
      Throws:
      IOException
    • deleteLocalFiles

      public void deleteLocalFiles(String subdir) throws IOException
      Throws:
      IOException
    • getLocalPath

      public Path getLocalPath(String pathString) throws IOException
      Constructs a local file name. Files are distributed among configured local directories.
      Throws:
      IOException
    • getUser

      public String getUser()
      Get the reported username for this job.
      Returns:
      the username
    • setUser

      public void setUser(String user)
      Set the reported username for this job.
      Parameters:
      user - the username for this job.
    • setKeepFailedTaskFiles

      public void setKeepFailedTaskFiles(boolean keep)
      Set whether the framework should keep the intermediate files for failed tasks.
      Parameters:
      keep - true if framework should keep the intermediate files for failed tasks, false otherwise.
    • getKeepFailedTaskFiles

      public boolean getKeepFailedTaskFiles()
      Should the temporary files for failed tasks be kept?
      Returns:
      should the files be kept?
    • setKeepTaskFilesPattern

      public void setKeepTaskFilesPattern(String pattern)
      Set a regular expression for task names that should be kept. The regular expression ".*_m_000123_0" would keep the files for the first instance of map 123 that ran.
      Parameters:
      pattern - the java.util.regex.Pattern to match against the task names.
    • getKeepTaskFilesPattern

      public String getKeepTaskFilesPattern()
      Get the regular expression that is matched against the task names to see if we need to keep the files.
      Returns:
      the pattern as a string, if it was set, otherwise null.
    • setWorkingDirectory

      public void setWorkingDirectory(Path dir)
      Set the current working directory for the default file system.
      Parameters:
      dir - the new current working directory.
    • getWorkingDirectory

      public Path getWorkingDirectory()
      Get the current working directory for the default file system.
      Returns:
      the directory name.
    • setNumTasksToExecutePerJvm

      public void setNumTasksToExecutePerJvm(int numTasks)
      Sets the number of tasks that a spawned task JVM should run before it exits
      Parameters:
      numTasks - the number of tasks to execute; defaults to 1; -1 signifies no limit
    • getNumTasksToExecutePerJvm

      public int getNumTasksToExecutePerJvm()
      Get the number of tasks that a spawned JVM should execute
    • getInputFormat

      public InputFormat getInputFormat()
      Get the InputFormat implementation for the map-reduce job, defaults to TextInputFormat if not specified explicitly.
      Returns:
      the InputFormat implementation for the map-reduce job.
    • setInputFormat

      public void setInputFormat(Class<? extends InputFormat> theClass)
      Set the InputFormat implementation for the map-reduce job.
      Parameters:
      theClass - the InputFormat implementation for the map-reduce job.
    • getOutputFormat

      public OutputFormat getOutputFormat()
      Get the OutputFormat implementation for the map-reduce job, defaults to TextOutputFormat if not specified explicitly.
      Returns:
      the OutputFormat implementation for the map-reduce job.
    • getOutputCommitter

      public OutputCommitter getOutputCommitter()
      Get the OutputCommitter implementation for the map-reduce job, defaults to FileOutputCommitter if not specified explicitly.
      Returns:
      the OutputCommitter implementation for the map-reduce job.
    • setOutputCommitter

      public void setOutputCommitter(Class<? extends OutputCommitter> theClass)
      Set the OutputCommitter implementation for the map-reduce job.
      Parameters:
      theClass - the OutputCommitter implementation for the map-reduce job.
    • setOutputFormat

      public void setOutputFormat(Class<? extends OutputFormat> theClass)
      Set the OutputFormat implementation for the map-reduce job.
      Parameters:
      theClass - the OutputFormat implementation for the map-reduce job.
    • setCompressMapOutput

      public void setCompressMapOutput(boolean compress)
      Should the map outputs be compressed before transfer?
      Parameters:
      compress - should the map outputs be compressed?
    • getCompressMapOutput

      public boolean getCompressMapOutput()
      Are the outputs of the maps compressed?
      Returns:
      true if the outputs of the maps are to be compressed, false otherwise.
    • setMapOutputCompressorClass

      public void setMapOutputCompressorClass(Class<? extends CompressionCodec> codecClass)
      Set the given class as the CompressionCodec for the map outputs.
      Parameters:
      codecClass - the CompressionCodec class that will compress the map outputs.
    • getMapOutputCompressorClass

      public Class<? extends CompressionCodec> getMapOutputCompressorClass(Class<? extends CompressionCodec> defaultValue)
      Get the CompressionCodec for compressing the map outputs.
      Parameters:
      defaultValue - the CompressionCodec to return if not set
      Returns:
      the CompressionCodec class that should be used to compress the map outputs.
      Throws:
      IllegalArgumentException - if the class was specified, but not found
    • getMapOutputKeyClass

      public Class<?> getMapOutputKeyClass()
      Get the key class for the map output data. If it is not set, use the (final) output key class. This allows the map output key class to be different than the final output key class.
      Returns:
      the map output key class.
    • setMapOutputKeyClass

      public void setMapOutputKeyClass(Class<?> theClass)
      Set the key class for the map output data. This allows the user to specify the map output key class to be different than the final output key class.
      Parameters:
      theClass - the map output key class.
    • getMapOutputValueClass

      public Class<?> getMapOutputValueClass()
      Get the value class for the map output data. If it is not set, use the (final) output value class. This allows the map output value class to be different than the final output value class.
      Returns:
      the map output value class.
    • setMapOutputValueClass

      public void setMapOutputValueClass(Class<?> theClass)
      Set the value class for the map output data. This allows the user to specify the map output value class to be different than the final output value class.
      Parameters:
      theClass - the map output value class.
    • getOutputKeyClass

      public Class<?> getOutputKeyClass()
      Get the key class for the job output data.
      Returns:
      the key class for the job output data.
    • setOutputKeyClass

      public void setOutputKeyClass(Class<?> theClass)
      Set the key class for the job output data.
      Parameters:
      theClass - the key class for the job output data.
    • getOutputKeyComparator

      public RawComparator getOutputKeyComparator()
      Get the RawComparator comparator used to compare keys.
      Returns:
      the RawComparator comparator used to compare keys.
    • setOutputKeyComparatorClass

      public void setOutputKeyComparatorClass(Class<? extends RawComparator> theClass)
      Set the RawComparator comparator used to compare keys.
      Parameters:
      theClass - the RawComparator comparator used to compare keys.
      See Also:
    • setKeyFieldComparatorOptions

      public void setKeyFieldComparatorOptions(String keySpec)
      Set the KeyFieldBasedComparator options used to compare keys.
      Parameters:
      keySpec - the key specification of the form -k pos1[,pos2], where pos is of the form f[.c][opts]: f is the number of the key field to use, and c is the number of the first character from the beginning of the field. Fields and character positions are numbered starting with 1; a character position of zero in pos2 indicates the field's last character. If '.c' is omitted from pos1, it defaults to 1 (the beginning of the field); if omitted from pos2, it defaults to 0 (the end of the field). opts are ordering options; the supported options are -n (sort numerically) and -r (reverse the result of comparison).
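As a sketch (assuming tab-separated keys and an existing JobConf named job), sorting on the second key field numerically in reverse order looks like:

```java
// "-k2,2nr": use field 2 as the sort key, compare numerically (n),
// and reverse the comparison result (r).
job.setKeyFieldComparatorOptions("-k2,2nr");
```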
    • getKeyFieldComparatorOption

      public String getKeyFieldComparatorOption()
      Get the KeyFieldBasedComparator options
    • setKeyFieldPartitionerOptions

      public void setKeyFieldPartitionerOptions(String keySpec)
      Set the KeyFieldBasedPartitioner options used for Partitioner
      Parameters:
      keySpec - the key specification of the form -k pos1[,pos2], where pos is of the form f[.c][opts]: f is the number of the key field to use, and c is the number of the first character from the beginning of the field. Fields and character positions are numbered starting with 1; a character position of zero in pos2 indicates the field's last character. If '.c' is omitted from pos1, it defaults to 1 (the beginning of the field); if omitted from pos2, it defaults to 0 (the end of the field).
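A companion sketch: partition on the first key field only, so all records sharing that field reach the same reducer regardless of the rest of the key (assumes an existing JobConf named job):

```java
job.setPartitionerClass(KeyFieldBasedPartitioner.class);
// "-k1,1": hash only field 1 when choosing a reduce partition.
job.setKeyFieldPartitionerOptions("-k1,1");
```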
    • getKeyFieldPartitionerOption

      public String getKeyFieldPartitionerOption()
      Get the KeyFieldBasedPartitioner options
    • getCombinerKeyGroupingComparator

      public RawComparator getCombinerKeyGroupingComparator()
      Get the user defined WritableComparable comparator for grouping keys of inputs to the combiner.
      Returns:
      comparator set by the user for grouping values.
      See Also:
    • getOutputValueGroupingComparator

      public RawComparator getOutputValueGroupingComparator()
      Get the user defined WritableComparable comparator for grouping keys of inputs to the reduce.
      Returns:
      comparator set by the user for grouping values.
      See Also:
    • setCombinerKeyGroupingComparator

      public void setCombinerKeyGroupingComparator(Class<? extends RawComparator> theClass)
      Set the user defined RawComparator comparator for grouping keys in the input to the combiner.

      This comparator should be provided if the equivalence rules for keys for sorting the intermediates are different from those for grouping keys before each call to Reducer.reduce(Object, java.util.Iterator, OutputCollector, Reporter).

      For key-value pairs (K1,V1) and (K2,V2), the values (V1, V2) are passed in a single call to the reduce function if K1 and K2 compare as equal.

      Since setOutputKeyComparatorClass(Class) can be used to control how keys are sorted, this can be used in conjunction to simulate secondary sort on values.

      Note: This is not a guarantee of the combiner sort being stable in any sense. (In any case, with the order of available map-outputs to the combiner being non-deterministic, it wouldn't make that much sense.)

      Parameters:
      theClass - the comparator class to be used for grouping keys for the combiner. It should implement RawComparator.
      See Also:
    • setOutputValueGroupingComparator

      public void setOutputValueGroupingComparator(Class<? extends RawComparator> theClass)
      Set the user defined RawComparator comparator for grouping keys in the input to the reduce.

      This comparator should be provided if the equivalence rules for keys for sorting the intermediates are different from those for grouping keys before each call to Reducer.reduce(Object, java.util.Iterator, OutputCollector, Reporter).

      For key-value pairs (K1,V1) and (K2,V2), the values (V1, V2) are passed in a single call to the reduce function if K1 and K2 compare as equal.

      Since setOutputKeyComparatorClass(Class) can be used to control how keys are sorted, this can be used in conjunction to simulate secondary sort on values.

      Note: This is not a guarantee of the reduce sort being stable in any sense. (In any case, with the order of available map-outputs to the reduce being non-deterministic, it wouldn't make that much sense.)

      Parameters:
      theClass - the comparator class to be used for grouping keys. It should implement RawComparator.
      See Also:
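The two comparators combine into the usual secondary-sort pattern. A sketch, where FullKeyComparator and FirstPartComparator are hypothetical user-written RawComparator implementations (not part of this API):

```java
// Sort intermediates by the full (primary, secondary) key...
job.setOutputKeyComparatorClass(FullKeyComparator.class);        // hypothetical
// ...but group values into one reduce() call by the primary part only,
// so each reduce call sees its values ordered by the secondary part.
job.setOutputValueGroupingComparator(FirstPartComparator.class); // hypothetical
```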
    • getUseNewMapper

      public boolean getUseNewMapper()
      Should the framework use the new context-object code for running the mapper?
      Returns:
      true, if the new api should be used
    • setUseNewMapper

      public void setUseNewMapper(boolean flag)
      Set whether the framework should use the new api for the mapper. This is the default for jobs submitted with the new Job api.
      Parameters:
      flag - true, if the new api should be used
    • getUseNewReducer

      public boolean getUseNewReducer()
      Should the framework use the new context-object code for running the reducer?
      Returns:
      true, if the new api should be used
    • setUseNewReducer

      public void setUseNewReducer(boolean flag)
      Set whether the framework should use the new api for the reducer. This is the default for jobs submitted with the new Job api.
      Parameters:
      flag - true, if the new api should be used
    • getOutputValueClass

      public Class<?> getOutputValueClass()
      Get the value class for job outputs.
      Returns:
      the value class for job outputs.
    • setOutputValueClass

      public void setOutputValueClass(Class<?> theClass)
      Set the value class for job outputs.
      Parameters:
      theClass - the value class for job outputs.
    • getMapperClass

      public Class<? extends Mapper> getMapperClass()
      Get the Mapper class for the job.
      Returns:
      the Mapper class for the job.
    • setMapperClass

      public void setMapperClass(Class<? extends Mapper> theClass)
      Set the Mapper class for the job.
      Parameters:
      theClass - the Mapper class for the job.
    • getMapRunnerClass

      public Class<? extends MapRunnable> getMapRunnerClass()
      Get the MapRunnable class for the job.
      Returns:
      the MapRunnable class for the job.
    • setMapRunnerClass

      public void setMapRunnerClass(Class<? extends MapRunnable> theClass)
      Expert: Set the MapRunnable class for the job. Typically used to exert greater control on Mappers.
      Parameters:
      theClass - the MapRunnable class for the job.
    • getPartitionerClass

      public Class<? extends Partitioner> getPartitionerClass()
      Get the Partitioner used to partition Mapper-outputs to be sent to the Reducers.
      Returns:
      the Partitioner used to partition map-outputs.
    • setPartitionerClass

      public void setPartitionerClass(Class<? extends Partitioner> theClass)
      Set the Partitioner class used to partition Mapper-outputs to be sent to the Reducers.
      Parameters:
      theClass - the Partitioner used to partition map-outputs.
    • getReducerClass

      public Class<? extends Reducer> getReducerClass()
      Get the Reducer class for the job.
      Returns:
      the Reducer class for the job.
    • setReducerClass

      public void setReducerClass(Class<? extends Reducer> theClass)
      Set the Reducer class for the job.
      Parameters:
      theClass - the Reducer class for the job.
    • getCombinerClass

      public Class<? extends Reducer> getCombinerClass()
      Get the user-defined combiner class used to combine map-outputs before being sent to the reducers. Typically the combiner is the same as the Reducer for the job, i.e. getReducerClass().
      Returns:
      the user-defined combiner class used to combine map-outputs.
    • setCombinerClass

      public void setCombinerClass(Class<? extends Reducer> theClass)
      Set the user-defined combiner class used to combine map-outputs before being sent to the reducers.

      The combiner is an application-specified aggregation operation, which can help cut down the amount of data transferred between the Mapper and the Reducer, leading to better performance.

      The framework may invoke the combiner 0, 1, or multiple times, in both the mapper and reducer tasks. In general, the combiner is called as the sort/merge result is written to disk. The combiner must:

      • be side-effect free
      • have the same input and output key types and the same input and output value types

      Typically the combiner is the same as the Reducer for the job, i.e. setReducerClass(Class).

      Parameters:
      theClass - the user-defined combiner class used to combine map-outputs.
    • getSpeculativeExecution

      public boolean getSpeculativeExecution()
      Should speculative execution be used for this job? Defaults to true.
      Returns:
      true if speculative execution is used for this job, false otherwise.
    • setSpeculativeExecution

      public void setSpeculativeExecution(boolean speculativeExecution)
      Turn speculative execution on or off for this job.
      Parameters:
      speculativeExecution - true if speculative execution should be turned on, else false.
    • getMapSpeculativeExecution

      public boolean getMapSpeculativeExecution()
      Should speculative execution be used for this job for map tasks? Defaults to true.
      Returns:
      true if speculative execution is used for this job for map tasks, false otherwise.
    • setMapSpeculativeExecution

      public void setMapSpeculativeExecution(boolean speculativeExecution)
      Turn speculative execution on or off for this job for map tasks.
      Parameters:
      speculativeExecution - true if speculative execution should be turned on for map tasks, else false.
    • getReduceSpeculativeExecution

      public boolean getReduceSpeculativeExecution()
      Should speculative execution be used for this job for reduce tasks? Defaults to true.
      Returns:
      true if speculative execution is used for reduce tasks for this job, false otherwise.
    • setReduceSpeculativeExecution

      public void setReduceSpeculativeExecution(boolean speculativeExecution)
      Turn speculative execution on or off for this job for reduce tasks.
      Parameters:
      speculativeExecution - true if speculative execution should be turned on for reduce tasks, else false.
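
      The relationship between the job-wide flag and the two per-task-type flags can be sketched with plain configuration keys. mapreduce.map.speculative and mapreduce.reduce.speculative are the real MapReduce property names; modeling the job-wide query as the OR of the two per-type flags is an assumption based on the API above, and the class below uses java.util.Properties rather than a real JobConf:

```java
import java.util.Properties;

// Sketch of how the job-wide speculative-execution flag relates to the
// per-task-type flags, using plain Properties instead of a JobConf.
// The property names are the real MapReduce keys; treating the job-wide
// query as the OR of the two per-type flags is an assumption.
public class SpeculationFlags {
    static final String MAP_KEY = "mapreduce.map.speculative";
    static final String REDUCE_KEY = "mapreduce.reduce.speculative";

    final Properties conf = new Properties();

    boolean getMapSpeculativeExecution() {
        // Both flags default to true, matching the documentation above.
        return Boolean.parseBoolean(conf.getProperty(MAP_KEY, "true"));
    }

    boolean getReduceSpeculativeExecution() {
        return Boolean.parseBoolean(conf.getProperty(REDUCE_KEY, "true"));
    }

    // setSpeculativeExecution(boolean) flips both task types at once.
    void setSpeculativeExecution(boolean on) {
        conf.setProperty(MAP_KEY, Boolean.toString(on));
        conf.setProperty(REDUCE_KEY, Boolean.toString(on));
    }

    // The job-wide query is true if either task type may speculate.
    boolean getSpeculativeExecution() {
        return getMapSpeculativeExecution() || getReduceSpeculativeExecution();
    }
}
```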
    • getNumMapTasks

      public int getNumMapTasks()
      Get the configured number of map tasks for this job. Defaults to 1.
      Returns:
      the number of map tasks for this job.
    • setNumMapTasks

      public void setNumMapTasks(int n)
      Set the number of map tasks for this job.

      Note: This is only a hint to the framework. The actual number of spawned map tasks depends on the number of InputSplits generated by the job's InputFormat.getSplits(JobConf, int). A custom InputFormat is typically used to accurately control the number of map tasks for the job.

      How many maps?

      The number of maps is usually driven by the total size of the inputs i.e. total number of blocks of the input files.

      The right level of parallelism for maps seems to be around 10-100 maps per node, although it has been set as high as 300 or so for very CPU-light map tasks. Task setup takes a while, so it is best if the maps take at least a minute to execute.

      The default behavior of file-based InputFormats is to split the input into logical InputSplits based on the total size, in bytes, of input files. However, the FileSystem blocksize of the input files is treated as an upper bound for input splits. A lower bound on the split size can be set via mapreduce.input.fileinputformat.split.minsize.

      Thus, if you expect 10TB of input data and have a blocksize of 128MB, you'll end up with 82,000 maps, unless setNumMapTasks(int) is used to set it even higher.

      Parameters:
      n - the number of map tasks for this job.
      See Also:
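
      The arithmetic behind the example above follows from the usual file-based InputFormat rule that the effective split size is max(minSize, min(maxSize, blockSize)). The sketch below is illustrative; the method names are not Hadoop APIs:

```java
// Illustrative split-count math for file-based InputFormats.
// splitSize = max(minSize, min(maxSize, blockSize)) is the usual
// FileInputFormat rule; the method names here are not Hadoop APIs.
public class SplitMath {
    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    static long numSplits(long totalBytes, long splitSize) {
        // Ceiling division: a final partial block still becomes a split.
        return (totalBytes + splitSize - 1) / splitSize;
    }
}
```

      For 10TB of input and a 128MB block size this gives 10 * 1024 * 1024 / 128 = 81,920 splits, i.e. the roughly 82,000 maps quoted above.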
    • getNumReduceTasks

      public int getNumReduceTasks()
      Get the configured number of reduce tasks for this job. Defaults to 1.
      Returns:
      the number of reduce tasks for this job.
    • setNumReduceTasks

      public void setNumReduceTasks(int n)
      Set the requisite number of reduce tasks for this job.

      How many reduces?

      The right number of reduces seems to be 0.95 or 1.75 multiplied by (available memory for reduce tasks / mapreduce.reduce.memory.mb). Note that the total memory available for reduce tasks should be less than numNodes * yarn.nodemanager.resource.memory-mb, since memory is shared by map tasks and other applications.

      With 0.95 all of the reduces can launch immediately and start transferring map outputs as the maps finish. With 1.75 the faster nodes will finish their first round of reduces and launch a second wave of reduces, doing a much better job of load balancing.

      Increasing the number of reduces increases the framework overhead, but increases load balancing and lowers the cost of failures.

      The scaling factors above are slightly less than whole numbers to reserve a few reduce slots in the framework for speculative-tasks, failures etc.

      Reducer NONE

      It is legal to set the number of reduce-tasks to zero.

      In this case the output of the map tasks goes directly to the distributed file-system, to the path set by FileOutputFormat.setOutputPath(JobConf, Path). The framework does not sort the map-outputs before writing them out to HDFS.

      Parameters:
      n - the number of reduce tasks for this job.
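
      The 0.95/1.75 heuristic above reduces to a one-line computation. The sketch below is illustrative arithmetic, not Hadoop code, and the names are hypothetical:

```java
// Sketch of the reduce-count heuristic quoted above. A factor of 0.95
// lets all reduces launch as soon as the maps finish; 1.75 produces a
// second wave for better load balancing. Names are illustrative only.
public class ReduceCount {
    static int suggestedReduces(double factor,
                                long availableReduceMemMB,
                                long reduceTaskMemMB) {
        return (int) (factor * availableReduceMemMB / reduceTaskMemMB);
    }
}
```

      With 100,000 MB available for reduces and mapreduce.reduce.memory.mb = 1,000, the two factors suggest 95 or 175 reduce tasks respectively.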
    • getMaxMapAttempts

      public int getMaxMapAttempts()
      Get the configured number of maximum attempts that will be made to run a map task, as specified by the mapreduce.map.maxattempts property. If this property is not already set, the default is 4 attempts.
      Returns:
      the max number of attempts per map task.
    • setMaxMapAttempts

      public void setMaxMapAttempts(int n)
      Expert: Set the number of maximum attempts that will be made to run a map task.
      Parameters:
      n - the number of attempts per map task.
    • getMaxReduceAttempts

      public int getMaxReduceAttempts()
      Get the configured number of maximum attempts that will be made to run a reduce task, as specified by the mapreduce.reduce.maxattempts property. If this property is not already set, the default is 4 attempts.
      Returns:
      the max number of attempts per reduce task.
    • setMaxReduceAttempts

      public void setMaxReduceAttempts(int n)
      Expert: Set the number of maximum attempts that will be made to run a reduce task.
      Parameters:
      n - the number of attempts per reduce task.
    • getJobName

      public String getJobName()
      Get the user-specified job name. This is only used to identify the job to the user.
      Returns:
      the job's name, defaulting to "".
    • setJobName

      public void setJobName(String name)
      Set the user-specified job name.
      Parameters:
      name - the job's new name.
    • getSessionId

      @Deprecated public String getSessionId()
      Deprecated.
      Get the user-specified session identifier. The default is the empty string. The session identifier is used to tag metric data that is reported to some performance metrics system via the org.apache.hadoop.metrics API. The session identifier is intended, in particular, for use by Hadoop-On-Demand (HOD), which allocates a virtual Hadoop cluster dynamically and transiently. HOD will set the session identifier by modifying the mapred-site.xml file before starting the cluster. When not running under HOD, this identifier is expected to remain set to the empty string.
      Returns:
      the session identifier, defaulting to "".
    • setSessionId

      @Deprecated public void setSessionId(String sessionId)
      Deprecated.
      Set the user-specified session identifier.
      Parameters:
      sessionId - the new session id.
    • setMaxTaskFailuresPerTracker

      public void setMaxTaskFailuresPerTracker(int noFailures)
      Set the maximum no. of failures of a given job per tasktracker. If the no. of task failures exceeds noFailures, the tasktracker is blacklisted for this job.
      Parameters:
      noFailures - maximum no. of failures of a given job per tasktracker.
    • getMaxTaskFailuresPerTracker

      public int getMaxTaskFailuresPerTracker()
      Expert: Get the maximum no. of failures of a given job per tasktracker. If the no. of task failures exceeds this, the tasktracker is blacklisted for this job.
      Returns:
      the maximum no. of failures of a given job per tasktracker.
    • getMaxMapTaskFailuresPercent

      public int getMaxMapTaskFailuresPercent()
      Get the maximum percentage of map tasks that can fail without the job being aborted. Each map task is executed a minimum of getMaxMapAttempts() attempts before being declared as failed. Defaults to zero, i.e. any failed map-task results in the job being declared as JobStatus.FAILED.
      Returns:
      the maximum percentage of map tasks that can fail without the job being aborted.
    • setMaxMapTaskFailuresPercent

      public void setMaxMapTaskFailuresPercent(int percent)
      Expert: Set the maximum percentage of map tasks that can fail without the job being aborted. Each map task is executed a minimum of getMaxMapAttempts() attempts before being declared as failed.
      Parameters:
      percent - the maximum percentage of map tasks that can fail without the job being aborted.
    • getMaxReduceTaskFailuresPercent

      public int getMaxReduceTaskFailuresPercent()
      Get the maximum percentage of reduce tasks that can fail without the job being aborted. Each reduce task is executed a minimum of getMaxReduceAttempts() attempts before being declared as failed. Defaults to zero, i.e. any failed reduce-task results in the job being declared as JobStatus.FAILED.
      Returns:
      the maximum percentage of reduce tasks that can fail without the job being aborted.
    • setMaxReduceTaskFailuresPercent

      public void setMaxReduceTaskFailuresPercent(int percent)
      Set the maximum percentage of reduce tasks that can fail without the job being aborted. Each reduce task is executed a minimum of getMaxReduceAttempts() attempts before being declared as failed.
      Parameters:
      percent - the maximum percentage of reduce tasks that can fail without the job being aborted.
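
      The failure-percent thresholds above amount to a simple proportional check once a task has exhausted its attempts. The sketch below mirrors the documented semantics using integer arithmetic; it is not Hadoop code:

```java
// Illustrative check of the failures-percent threshold: the job is
// aborted once the share of permanently failed tasks exceeds the
// configured percentage. Mirrors the documented semantics only.
public class FailurePolicy {
    static boolean jobShouldFail(int failedTasks, int totalTasks, int maxFailuresPercent) {
        // Compare failedTasks/totalTasks against percent/100 without
        // floating point: fail only when strictly over the threshold.
        return failedTasks * 100 > maxFailuresPercent * totalTasks;
    }
}
```

      With the default threshold of zero, a single permanently failed task fails the job, matching the JobStatus.FAILED behavior described above.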
    • setJobPriority

      public void setJobPriority(JobPriority prio)
      Set JobPriority for this job.
      Parameters:
      prio - the JobPriority for this job.
    • setJobPriorityAsInteger

      public void setJobPriorityAsInteger(int prio)
      Set JobPriority for this job.
      Parameters:
      prio - the JobPriority for this job.
    • getJobPriority

      public JobPriority getJobPriority()
      Get the JobPriority for this job.
      Returns:
      the JobPriority for this job.
    • getJobPriorityAsInteger

      public int getJobPriorityAsInteger()
      Get the priority for this job.
      Returns:
      the priority for this job.
    • getProfileEnabled

      public boolean getProfileEnabled()
      Get whether the task profiling is enabled.
      Returns:
      true if some tasks will be profiled
    • setProfileEnabled

      public void setProfileEnabled(boolean newValue)
      Set whether the system should collect profiler information for some of the tasks in this job. The information is stored in the user log directory.
      Parameters:
      newValue - true means it should be gathered
    • getProfileParams

      public String getProfileParams()
      Get the profiler configuration arguments. The default value for this property is "-agentlib:hprof=cpu=samples,heap=sites,force=n,thread=y,verbose=n,file=%s"
      Returns:
      the parameters to pass to the task child to configure profiling
    • setProfileParams

      public void setProfileParams(String value)
      Set the profiler configuration arguments. If the string contains a '%s' it will be replaced with the name of the profiling output file when the task runs. This value is passed to the task child JVM on the command line.
      Parameters:
      value - the configuration string
    • getProfileTaskRange

      public org.apache.hadoop.conf.Configuration.IntegerRanges getProfileTaskRange(boolean isMap)
      Get the range of maps or reduces to profile.
      Parameters:
      isMap - is the task a map?
      Returns:
      the task ranges
    • setProfileTaskRange

      public void setProfileTaskRange(boolean isMap, String newValue)
      Set the ranges of maps or reduces to profile. setProfileEnabled(true) must also be called.
      Parameters:
      newValue - a set of integer ranges of the map ids
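
      The range value passed here uses the IntegerRanges syntax, a comma-separated list of ids and id ranges such as "0-2,5". A minimal stand-alone parser sketch (Hadoop's Configuration.IntegerRanges does this for real; this version is for illustration only):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of expanding an integer-ranges string such as "0-2,5"
// into the individual task ids it covers. Illustrative only; Hadoop's
// Configuration.IntegerRanges is the real implementation.
public class TaskRanges {
    static List<Integer> expand(String ranges) {
        List<Integer> ids = new ArrayList<>();
        for (String part : ranges.split(",")) {
            String p = part.trim();
            int dash = p.indexOf('-');
            if (dash < 0) {
                ids.add(Integer.parseInt(p)); // single id, e.g. "5"
            } else {
                int lo = Integer.parseInt(p.substring(0, dash));
                int hi = Integer.parseInt(p.substring(dash + 1));
                for (int i = lo; i <= hi; i++) ids.add(i); // inclusive range
            }
        }
        return ids;
    }
}
```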
    • setMapDebugScript

      public void setMapDebugScript(String mDbgScript)
      Set the debug script to run when the map tasks fail.

      The debug script can aid debugging of failed map tasks. The script is given the task's stdout, stderr, syslog and jobconf files as arguments.

      The debug command, run on the node where the map failed, is:

       $script $stdout $stderr $syslog $jobconf.
       

      The script file is distributed through the DistributedCache APIs, and the script needs to be symlinked.

      Here is an example on how to submit a script

       job.setMapDebugScript("./myscript");
       DistributedCache.createSymlink(job);
       DistributedCache.addCacheFile(new URI("/debug/scripts/myscript#myscript"), job);
       
      Parameters:
      mDbgScript - the script name
    • getMapDebugScript

      public String getMapDebugScript()
      Get the map task's debug script.
      Returns:
      the debug Script for the mapred job for failed map tasks.
      See Also:
    • setReduceDebugScript

      public void setReduceDebugScript(String rDbgScript)
      Set the debug script to run when the reduce tasks fail.

      The debug script can aid debugging of failed reduce tasks. The script is given the task's stdout, stderr, syslog and jobconf files as arguments.

      The debug command, run on the node where the reduce failed, is:

       $script $stdout $stderr $syslog $jobconf.
       

      The script file is distributed through the DistributedCache APIs, and the script file needs to be symlinked.

      Here is an example on how to submit a script

       job.setReduceDebugScript("./myscript");
       DistributedCache.createSymlink(job);
       DistributedCache.addCacheFile(new URI("/debug/scripts/myscript#myscript"), job);
       
      Parameters:
      rDbgScript - the script name
    • getReduceDebugScript

      public String getReduceDebugScript()
      Get the reduce task's debug script.
      Returns:
      the debug script for the mapred job for failed reduce tasks.
      See Also:
    • getJobEndNotificationURI

      public String getJobEndNotificationURI()
      Get the URI to be invoked in order to send a notification after the job has completed (success/failure).
      Returns:
      the job end notification URI, null if it hasn't been set.
      See Also:
    • setJobEndNotificationURI

      public void setJobEndNotificationURI(String uri)
      Set the URI to be invoked in order to send a notification after the job has completed (success/failure).

      The URI can contain two special parameters: $jobId and $jobStatus. Those, if present, are replaced by the job's identifier and completion status respectively.

      This is typically used by application-writers to implement chaining of Map-Reduce jobs in an asynchronous manner.

      Parameters:
      uri - the job end notification uri
      See Also:
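
      The $jobId/$jobStatus expansion described above amounts to a simple string substitution. A sketch under that assumption (not the framework's actual notifier code):

```java
// Sketch of the job-end notification URI expansion: $jobId and $jobStatus
// are replaced with the job's identifier and final status. Illustrative
// only; the class and method names here are not Hadoop APIs.
public class NotificationUri {
    static String expand(String uri, String jobId, String jobStatus) {
        return uri.replace("$jobId", jobId).replace("$jobStatus", jobStatus);
    }
}
```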
    • getJobEndNotificationCustomNotifierClass

      public String getJobEndNotificationCustomNotifierClass()
      Returns the class to be invoked in order to send a notification after the job has completed (success/failure).
      Returns:
      the fully-qualified name of the class which implements CustomJobEndNotifier set through the MRJobConfig.MR_JOB_END_NOTIFICATION_CUSTOM_NOTIFIER_CLASS property
      See Also:
    • setJobEndNotificationCustomNotifierClass

      public void setJobEndNotificationCustomNotifierClass(String customNotifierClassName)
      Sets the class to be invoked in order to send a notification after the job has completed (success/failure). A notification URL still has to be set; it will be passed to CustomJobEndNotifier.notifyOnce(java.net.URL, org.apache.hadoop.conf.Configuration) along with the job's configuration. If this property is set, then instead of using a simple HttpURLConnection the framework creates a new instance of this class, which must implement CustomJobEndNotifier, and invokes that.
      Parameters:
      customNotifierClassName - the fully-qualified name of the class which implements CustomJobEndNotifier
      See Also:
    • getJobLocalDir

      public String getJobLocalDir()
      Get the job-specific shared directory for use as scratch space.

      When a job starts, a shared directory is created at ${mapreduce.cluster.local.dir}/taskTracker/$user/jobcache/$jobid/work/. This directory is exposed to users through mapreduce.job.local.dir, so tasks can use it as scratch space and share files among themselves.

      This value is also available as a system property.
      Returns:
      The localized job specific shared directory
    • getMemoryForMapTask

      public long getMemoryForMapTask()
      Get memory required to run a map task of the job, in MB. If a value is specified in the configuration, it is returned. Else, it returns MRJobConfig.DEFAULT_MAP_MEMORY_MB.

      For backward compatibility, if the job configuration sets the key MAPRED_TASK_MAXVMEM_PROPERTY to a value different from DISABLED_MEMORY_LIMIT, that value will be used after converting it from bytes to MB.

      Returns:
      memory required to run a map task of the job, in MB.
    • setMemoryForMapTask

      public void setMemoryForMapTask(long mem)
    • getMemoryForReduceTask

      public long getMemoryForReduceTask()
      Get memory required to run a reduce task of the job, in MB. If a value is specified in the configuration, it is returned. Else, it returns MRJobConfig.DEFAULT_REDUCE_MEMORY_MB.

      For backward compatibility, if the job configuration sets the key MAPRED_TASK_MAXVMEM_PROPERTY to a value different from DISABLED_MEMORY_LIMIT, that value will be used after converting it from bytes to MB.

      Returns:
      memory required to run a reduce task of the job, in MB.
    • setMemoryForReduceTask

      public void setMemoryForReduceTask(long mem)
    • getQueueName

      public String getQueueName()
      Return the name of the queue to which this job is submitted. Defaults to 'default'.
      Returns:
      name of the queue
    • setQueueName

      public void setQueueName(String queueName)
      Set the name of the queue to which this job should be submitted.
      Parameters:
      queueName - Name of the queue
    • normalizeMemoryConfigValue

      public static long normalizeMemoryConfigValue(long val)
      Normalize negative values in the configuration.
      Parameters:
      val -
      Returns:
      normalized value
    • findContainingJar

      public static String findContainingJar(Class my_class)
      Find a jar that contains a class of the same name, if any. It will return a jar file, even if that is not the first thing on the class path that has a class with the same name.
      Parameters:
      my_class - the class to find.
      Returns:
      a jar file that contains the class, or null.
    • getMaxVirtualMemoryForTask

      @Deprecated public long getMaxVirtualMemoryForTask()
      Get the memory required to run a task of this job, in bytes. See MAPRED_TASK_MAXVMEM_PROPERTY

      This method is deprecated. Now, different memory limits can be set for map and reduce tasks of a job, in MB.

      For backward compatibility, if the job configuration sets the key MAPRED_TASK_MAXVMEM_PROPERTY, that value is returned. Otherwise, this method will return the larger of the values returned by getMemoryForMapTask() and getMemoryForReduceTask() after converting them into bytes.

      Returns:
      Memory required to run a task of this job, in bytes.
      See Also:
    • setMaxVirtualMemoryForTask

      @Deprecated public void setMaxVirtualMemoryForTask(long vmem)
      Set the maximum amount of memory any task of this job can use. See MAPRED_TASK_MAXVMEM_PROPERTY

      mapred.task.maxvmem is split into mapreduce.map.memory.mb and mapreduce.reduce.memory.mb; each of the new keys is set to mapred.task.maxvmem / 1024, since the new values are in MB.

      Parameters:
      vmem - Maximum amount of virtual memory in bytes any task of this job can use.
      See Also:
    • getMaxPhysicalMemoryForTask

      @Deprecated public long getMaxPhysicalMemoryForTask()
      Deprecated.
      This variable is deprecated and no longer in use.
    • setMaxPhysicalMemoryForTask

      @Deprecated public void setMaxPhysicalMemoryForTask(long mem)
      Deprecated.
    • getTaskJavaOpts

      @Private public String getTaskJavaOpts(TaskType taskType)
    • parseMaximumHeapSizeMB

      @Private @VisibleForTesting public static int parseMaximumHeapSizeMB(String javaOpts)
      Parse the maximum heap size from the java opts, as specified by the -Xmx option. Format: -Xmx&lt;size&gt;[g|G|m|M|k|K]
      Parameters:
      javaOpts - String to parse to read maximum heap size
      Returns:
      Maximum heap size in MB or -1 if not specified
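
      A stand-alone sketch of the -Xmx parsing described above (not the Hadoop implementation; the class and method names are hypothetical):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Stand-alone sketch of parsing the maximum heap size out of a java-opts
// string. Format: -Xmx<size>[g|G|m|M|k|K]; returns MB, or -1 if absent.
// Not the Hadoop implementation.
public class HeapOpts {
    private static final Pattern XMX = Pattern.compile("-Xmx(\\d+)([gGmMkK]?)");

    static int maxHeapSizeMB(String javaOpts) {
        Matcher m = XMX.matcher(javaOpts);
        int result = -1;
        while (m.find()) { // a later occurrence overrides an earlier one
            long size = Long.parseLong(m.group(1));
            switch (m.group(2).toLowerCase()) {
                case "g": result = (int) (size * 1024); break;          // GB -> MB
                case "m": result = (int) size; break;                   // already MB
                case "k": result = (int) (size / 1024); break;          // KB -> MB
                default:  result = (int) (size / (1024 * 1024)); break; // bytes -> MB
            }
        }
        return result;
    }
}
```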
    • getMemoryRequired

      @Private public int getMemoryRequired(TaskType taskType)
    • main

      public static void main(String[] args) throws Exception
      Throws:
      Exception