Class DataDrivenDBInputFormat<T extends DBWritable>

All Implemented Interfaces:
Configurable
Direct Known Subclasses:
OracleDataDrivenDBInputFormat

@Public @Evolving public class DataDrivenDBInputFormat<T extends DBWritable> extends DBInputFormat<T> implements Configurable
An InputFormat that reads input data from a SQL table. It operates like DBInputFormat, but instead of using LIMIT and OFFSET to demarcate splits, it generates WHERE clauses that separate the data into roughly equal-sized shards.
  • Field Details

    • SUBSTITUTE_TOKEN

      public static final String SUBSTITUTE_TOKEN
If users provide their own query, this string is expected to appear in its WHERE clause; it is substituted with a pair of conditions on the input column so that the input splits can parallelize the import.
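As an illustration (not part of the Hadoop API), the substitution amounts to a plain string replacement of the token with a pair of range conditions; the column name `id` and the bounds below are hypothetical:

```java
public class ConditionsSubstitution {
    // The token expected in a user-supplied query (the value of SUBSTITUTE_TOKEN).
    static final String SUBSTITUTE_TOKEN = "$CONDITIONS";

    // Replace the token with a concrete pair of split conditions.
    static String substitute(String query, String conditions) {
        return query.replace(SUBSTITUTE_TOKEN, conditions);
    }

    public static void main(String[] args) {
        String userQuery = "SELECT foo FROM mytable WHERE " + SUBSTITUTE_TOKEN;
        // One split might cover ids in [100, 250):
        System.out.println(substitute(userQuery, "(id >= 100) AND (id < 250)"));
    }
}
```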
  • Constructor Details

    • DataDrivenDBInputFormat

      public DataDrivenDBInputFormat()
  • Method Details

    • getSplitter

      protected DBSplitter getSplitter(int sqlDataType)
      Returns:
      the DBSplitter implementation to use to divide the table/query into InputSplits.
    • getSplits

      public List<InputSplit> getSplits(JobContext job) throws IOException
      Logically split the set of input files for the job.

      Each InputSplit is then assigned to an individual Mapper for processing.

Note: The split is a logical split of the inputs and the input files are not physically split into chunks. For example, a split could be an <input-file-path, start, offset> tuple. The InputFormat also creates the RecordReader to read the InputSplit.

      Overrides:
      getSplits in class DBInputFormat<T extends DBWritable>
      Parameters:
      job - job configuration.
      Returns:
a list of InputSplits for the job.
      Throws:
      IOException
    • getBoundingValsQuery

      protected String getBoundingValsQuery()
      Returns:
      a query which returns the minimum and maximum values for the order-by column. The min value should be in the first column, and the max value should be in the second column of the results.
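As a sketch (not the actual Hadoop implementation), a bounding-values query of the expected shape can be built like this; the table name, split column, and optional filter are hypothetical:

```java
public class BoundingQuery {
    // Build a query of the shape getBoundingValsQuery() is expected to return:
    // the minimum in the first result column, the maximum in the second.
    static String boundingValsQuery(String splitCol, String table, String conditions) {
        StringBuilder sb = new StringBuilder();
        sb.append("SELECT MIN(").append(splitCol).append("), MAX(")
          .append(splitCol).append(") FROM ").append(table);
        if (conditions != null && !conditions.isEmpty()) {
            sb.append(" WHERE ").append(conditions);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(boundingValsQuery("id", "mytable", null));
        // -> SELECT MIN(id), MAX(id) FROM mytable
    }
}
```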
    • setBoundingQuery

      public static void setBoundingQuery(Configuration conf, String query)
Set the user-defined bounding query to use with a user-defined query. This *must* include the substring "$CONDITIONS" (DataDrivenDBInputFormat.SUBSTITUTE_TOKEN) inside the WHERE clause, so that DataDrivenDBInputFormat knows where to insert split clauses. For example, "SELECT foo FROM mytable WHERE $CONDITIONS" would be expanded inside each split to something like: SELECT foo FROM mytable WHERE (id > 100) AND (id < 250).
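To show how the min/max bounds drive the per-split conditions, here is a simplified integer-range sketch, not Hadoop's actual DBSplitter code; the column name and query template are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

public class IntegerSplitSketch {
    // Divide [min, max] into numSplits contiguous ranges and emit one
    // condition pair per range, conceptually what an integer DBSplitter
    // produces for substitution into $CONDITIONS.
    static List<String> splitConditions(String col, long min, long max, int numSplits) {
        List<String> conds = new ArrayList<>();
        long span = (max - min) / numSplits;
        long lo = min;
        for (int i = 0; i < numSplits; i++) {
            boolean last = (i == numSplits - 1);
            long hi = last ? max : lo + span;
            // The last split uses an inclusive upper bound so max is not dropped.
            String upperOp = last ? " <= " : " < ";
            conds.add("(" + col + " >= " + lo + ") AND (" + col + upperOp + hi + ")");
            lo = hi;
        }
        return conds;
    }

    public static void main(String[] args) {
        for (String c : splitConditions("id", 0, 100, 2)) {
            System.out.println("SELECT foo FROM mytable WHERE " + c);
        }
        // -> SELECT foo FROM mytable WHERE (id >= 0) AND (id < 50)
        // -> SELECT foo FROM mytable WHERE (id >= 50) AND (id <= 100)
    }
}
```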
    • createDBRecordReader

      protected RecordReader<LongWritable,T> createDBRecordReader(org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit split, Configuration conf) throws IOException
      Overrides:
      createDBRecordReader in class DBInputFormat<T extends DBWritable>
      Throws:
      IOException
    • setInput

      public static void setInput(Job job, Class<? extends DBWritable> inputClass, String tableName, String conditions, String splitBy, String... fieldNames)
Note that the "orderBy" column is called "splitBy" in this version. The same field is reused, but it is used to partition the results rather than to strictly order them.
    • setInput

      public static void setInput(Job job, Class<? extends DBWritable> inputClass, String inputQuery, String inputBoundingQuery)
      setInput() takes a custom query and a separate "bounding query" to use instead of the custom "count query" used by DBInputFormat.