All Classes and Interfaces
Class
Description
Enum representing various ABFS backoff metrics.
Enum representing the status of an ABFS block.
Exception to be thrown if any Runtime Exception occurs.
ABFS error constants.
Keeps all constant keys used in the ABFS REST client.
Specifies the version of the REST protocol used for processing the request.
Exception to wrap invalid checksum verification on client side.
Enum representing various ABFS read footer metrics.
Enum for ABFS Input Policies.
Enum representing the set of metrics tracked for the ABFS read thread pool.
Exception to wrap Azure service error responses.
The REST operation type (Read, Append, Other).
Azure Storage offers two sets of REST APIs for interacting with the storage account.
Statistics which are collected in ABFS.
Abort data being written to a stream, so that close() does
not write the data.
An abstract class to provide common implementation for the Counters
container in both mapred and mapreduce packages.
This is a base class for DNS to Switch mappings.
Parent class of all the events.
This class provides an interface for implementors of a Hadoop file system
(analogous to the VFS of Unix).
Builder for filesystem/filecontext operations of various kinds,
with option support.
Defines an enum for various types of configuration.
Ganglia slope values which equal the ordinal.
Subclass of AbstractService that provides basic implementations of the LaunchableService methods.
A simple liveliness monitor with which clients can register, trust the component to monitor liveliness, get a call-back on expiry and then finally unregister.
Abstract base class for MapWritable and SortedMapWritable. Unlike org.apache.nutch.crawl.MapWritable, this class allows creation of MapWritable<Writable, MapWritable> so the CLASS_TO_ID and ID_TO_CLASS maps travel with the class instead of being static.
The immutable metric.
Enumeration of Job UUID source.
This is the base implementation class for services.
An exception class for access control related issues.
Class representing a configured access control list.
This request object contains all the context information to determine whether
a user has permission to access the target entity.
Provide an OAuth2 access token to be used to authenticate http calls in
WebHDFS.
Access tokens generally expire.
Defines a single entry in an ACL.
Specifies the scope or intended usage of an ACL entry.
Class to pack an AclEntry into an integer.
Specifies the type of an ACL entry.
Protobuf enum hadoop.hdfs.AclEntryProto.AclEntryScopeProto
Protobuf enum hadoop.hdfs.AclEntryProto.AclEntryTypeProto
Protobuf enum hadoop.hdfs.AclEntryProto.FsActionProto
An AclStatus contains the ACL information of a specific file.
Composite service that exports the add/remove methods.
Enum of address types, as integers.
Expose adl:// scheme to access ADL file system.
Constants.
A FileSystem to access Azure Data Lake Store.
The core request sent by the ApplicationMaster to the ResourceManager to obtain resources in the cluster.
Class to construct instances of AllocateRequest with specific options.
The response sent by the ResourceManager to the ApplicationMaster during resource negotiation.
Class to describe all supported forms of namespaces for an allocation tag.
Command sent by the Resource Manager to the Application Master in the AllocateResponse.
AMRMClient<T extends org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest>
AMRMClientAsync<T extends org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest>
AMRMClientAsync handles communication with the ResourceManager and provides asynchronous updates on events such as container allocations and completions.
AMRMTokenIdentifier is the TokenIdentifier to be used by ApplicationMasters to authenticate to the ResourceManager.
Client for managing applications.
Application access types.
This entity represents an application attempt.
ApplicationAttemptId denotes the particular attempt of an ApplicationMaster for a given ApplicationId.
This exception is thrown on the (GetApplicationAttemptReportRequest) API when the Application Attempt doesn't exist in the Application History Server, or on ApplicationMasterProtocol.allocate(AllocateRequest) if the application doesn't exist in the RM.
ApplicationAttemptReport is a report of an application attempt.
A URLClassLoader for application isolation.
Deprecated.
The protocol between clients and the ResourceManager to submit/abort jobs and to get information on applications, cluster metrics, nodes, queues and ACLs.
This is the API for the applications, comprising constants that YARN sets up for the applications and the containers.
The type of launch for the container.
Environment for Applications.
This entity represents an application.
The protocol between clients and the ApplicationHistoryServer to get information on completed applications, etc.
ApplicationId represents the globally unique identifier for an application.
Exception to be thrown when a client submits an application without providing an ApplicationId in the ApplicationSubmissionContext.
The ApplicationMaster for Dynamometer.
An ApplicationMaster for executing shell commands on a set of launched
containers using the YARN framework.
The protocol between a live instance of ApplicationMaster and the ResourceManager.
This exception is thrown on the (GetApplicationReportRequest) API when the Application doesn't exist in the RM and AHS.
ApplicationReport is a report of an application.
Contains various scheduling metrics to be reported by UI and CLI.
Enumeration that controls the scope of applications fetched.
ApplicationSubmissionContext represents all of the information needed by the ResourceManager to launch the ApplicationMaster for an application.
ApplicationTimeout is a report for configured application timeouts.
Application timeout type.
A dense file-based mapping from integers to values.
This class provides an implementation of ResetableIterator.
This class provides an implementation of ResetableIterator.
This is a wrapper class.
A Writable for arrays containing instances of a class.
Artifact of an service component.
Artifact Type.
Support IAM Assumed roles by instantiating an instance of STSAssumeRoleSessionCredentialsProvider from configuration properties, including wiring up the inner authenticator, and, unless overridden, creating a session name from the current user.
Dispatches Events in a separate thread.
Operations which are allowed in a Node Attributes Expression.
Interface defining an audit logger.
Flags which can be passed down during initialization, or after it.
Define the type of command, either read or write.
Definitions of the various commands that can be replayed.
Counter definitions for replay.
Keeps all the Azure Blob File System auth-related configurations.
An exception class for authorization-related issues.
Auth Type Enum.
Adapts an FSDataInputStream to Avro's SeekableInput interface.
Tag interface for Avro 'reflect' serializable classes.
Serialization for Avro Reflect classes.
Base class for providing serialization to Avro types.
Serialization for Avro Specific classes.
Enum to map AWS SDK V1 Acl values to SDK V2.
Access levels.
Wrap an S3Exception as an IOE, relaying all getters.
A specific exception from AWS operations.
Provide an Azure Active Directory supported OAuth2 access token to be used to authenticate REST calls against the Azure Data Lake file system, AdlFileSystem.
Base exception for any Azure Blob File System driver exceptions.
Provides the bridging logic between Hadoop's abstract filesystem and Azure Storage.
A Committer for the manifest committer which performs all bindings needed
to work best with abfs.
Azure service error codes.
Indicates that the operator has specified an invalid configuration
for fencing methods.
String Base64 configuration value Validator.
A base SecretManager for AMs to extend and validate Client-RM tokens issued to clients by the RM using the underlying master-key shared by RM to the AMs on their launch.
Interface filesystems MAY implement to offer a batched list.
Parameters for the sums.
Implement DBSplitter over BigDecimal values.
Interface supported by WritableComparable types supporting ordering/permutation by a representative set of bytes.
Partition BinaryComparable keys using a configurable part of the bytes array returned by BinaryComparable.getBytes().
Partition BinaryComparable keys using a configurable part of the bytes array returned by BinaryComparable.getBytes().
Deprecated.
Replaced by Avro.
Deprecated.
Replaced by Avro.
Combinable Flags to use when creating a service entry.
Binding information provided by a RegistryBindingSource.
This is a special committer which creates the factory for the committer and runs off that.
Enum for BlobCopyProgress.
Interface used to load provided blocks.
An abstract class used to read and write block maps for provided blocks.
A CompressorStream which works with 'block-based' compression algorithms, as opposed to 'stream-based' compression algorithms.
A DecompressorStream which works with 'block-based' compression algorithms, as opposed to 'stream-based' compression algorithms.
Represents the network location of a block, information about the hosts that contain block replicas, and other block metadata (e.g. the file offset associated with the block, length, whether it is corrupt, etc.).
Operation kind.
Options that can be specified when manually triggering a block report.
Given an external reference, create a sequence of blocks and associated
metadata.
A storage policy specifies the placement of block replicas on specific
storage types.
Type of a block.
Implements a Bloom filter, as defined by Bloom in 1970.
This class extends MapFile and provides very much the same functionality.
Boolean configuration value validator.
Implement DBSplitter over boolean values.
A WritableComparable for booleans.
A CharSequence appender that considers its BoundedAppender.limit as an upper bound.
Deprecated.
Replaced by Avro.
The current state of the gzip decoder, external to the Inflater context.
API for bulk deletion of objects/files,
but not directories.
Interface for bulk deletion.
Implementers of this interface provide a positioned read API that writes to a ByteBuffer rather than a byte[].
Implementers of this interface provide a read API that writes to a ByteBuffer, not a byte[].
A byte sequence that is usable as a key or value.
A WritableComparable for a single byte.
This class provides output and input streams for bzip2 compression
and decompression.
A cached implementation of DNSToSwitchMapping that takes a raw DNSToSwitchMapping and stores the resolved network location in a cache.
Describes a path-based cache directive entry.
Describes a path-based cache directive.
Describes a path-based cache directive.
Specifies semantics for CacheDirective operations.
Describes a Cache Pool entry.
CachePoolInfo describes a cache pool.
CachePoolStats describes cache pool statistics.
A class defining the caller context for auditing coarse granularity
operations.
The request issued by the client to the ResourceManager to cancel a delegation token.
The response from the ResourceManager to a cancelDelegationToken request.
This exception is thrown when the length of a LocatedBlock instance cannot be obtained.
FSDataInputStreams implement this interface to indicate that they can clear
their buffers on request.
A state machine to keep track of the current state of the decoder.
The ChainMapper class allows the use of multiple Mapper classes within a single Map task.
The ChainMapper class allows the use of multiple Mapper classes within a single Map task.
The ChainReducer class allows chaining multiple Mapper classes after a Reducer within the Reducer task.
The ChainReducer class allows chaining multiple Mapper classes after a Reducer within the Reducer task.
What to do when change is detected.
The S3 object attribute used to detect change.
Contract representing to the framework that the task can be safely preempted
and restarted between invocations of the user-defined function.
Thrown for checksum errors.
Abstract Checksumed FileSystem.
Enum of outcomes.
Interprets the MapReduce CLI options.
A client for an IPC service.
Client for submitting a Dynamometer YARN application, and optionally, a
workload MapReduce job.
Client for Distributed Shell application submission to YARN.
Interface for providing client assertions for Azure Workload Identity authentication.
Protobuf enum hadoop.hdfs.AddBlockFlagProto
Protobuf enum hadoop.hdfs.CacheFlagProto
Protobuf enum hadoop.hdfs.CreateFlagProto
Type of the datanode report.
Protobuf enum hadoop.hdfs.OpenFilesTypeProto
Protobuf enum hadoop.hdfs.RollingUpgradeActionProto
Protobuf enum hadoop.hdfs.SafeModeActionProto
The protocol between clients and the SharedCacheManager to claim and release resources in the shared cache.
A simple SecretManager for AMs to validate Client-RM tokens issued to clients by the RM using the underlying master-key shared by RM to the AMs on their launch.
A simple clock interface that gives you time.
Deprecated.
Use java.io.Closeable instead.
A task submitter which is closeable, and whose close() call
shuts down the pool.
Exception to denote if the underlying stream, cache or other closable resource
is closed.
Provides a way to access information about the map/reduce cluster.
This entity represents a YARN cluster.
Status information on the current state of the Map-Reduce cluster.
Status information on the current state of the Map-Reduce cluster.
Exception raised by HDFS indicating that storage capacity in the
cluster filesystem is exceeded.
Codec related constants.
A global compressor/decompressor pool used to save and reuse
(possibly native) compression/decompression codecs.
Collector info containing collector address and collector token passed from
RM to AM in Allocate Response.
An abstract InputFormat that returns CombineFileSplits in the InputFormat.getSplits(JobConf, int) method.
An abstract InputFormat that returns CombineFileSplits in the InputFormat.getSplits(JobContext) method.
A generic RecordReader that can hand out different RecordReaders for each chunk in a CombineFileSplit.
A generic RecordReader that can hand out different RecordReaders for each chunk in a CombineFileSplit.
A wrapper class for a record reader that handles a single file split.
A wrapper class for a record reader that handles a single file split.
A sub-collection of input files.
Input format that is a CombineFileInputFormat-equivalent for SequenceFileInputFormat.
Input format that is a CombineFileInputFormat-equivalent for SequenceFileInputFormat.
Input format that is a CombineFileInputFormat-equivalent for TextInputFormat.
Input format that is a CombineFileInputFormat-equivalent for TextInputFormat.
Constants for working with committers.
Response to Commit Container Request.
Statistic names for committers.
The common audit context is a map of common context information
which can be used with any audit span.
This class contains constants for configuration keys used
in the common code.
One or more components of the service.
Policy of restart component.
Containers of a component.
Refinement of InputFormat requiring implementors to provide
ComposableRecordReader instead of RecordReader.
Refinement of InputFormat requiring implementors to provide
ComposableRecordReader instead of RecordReader.
Additional operations required of a RecordReader to participate in a join.
Additional operations required of a RecordReader to participate in a join.
An InputFormat capable of performing joins over a set of data sources sorted
and partitioned the same way.
An InputFormat capable of performing joins over a set of data sources sorted
and partitioned the same way.
This InputSplit contains a set of child InputSplits.
This InputSplit contains a set of child InputSplits.
A RecordReader that can effect joins of RecordReaders sharing a common key
type and partitioning.
A RecordReader that can effect joins of RecordReaders sharing a common key
type and partitioning.
Composition of services.
A base-class for Writables which store themselves compressed and lazily
inflate on field access.
Compression algorithms.
This class encapsulates a streaming compression/decompression pair.
A factory that will find the correct codec for a given filename.
A compression input stream.
A compression output stream.
Specification of a stream-based 'compressor' which can be plugged into a CompressionOutputStream to compress data.
Thrown when a concurrent write operation is detected.
Obtain an access token via a credential (provided through the Configuration) using the Client Credentials Grant workflow.
A config file that needs to be created and made available as a volume in an
service component container.
Config Type.
Something that may be configured with a Configuration.
Provides access to configuration parameters.
Set of configuration properties that can be injected into the service
components via envs, files and custom pluggable helper docker containers.
This exception is thrown on unrecoverable configuration errors.
Keeps all the Azure Blob File System configuration keys in the Hadoop configuration file.
Thrown when a searched-for element is not found.
Base class for things that may be configured with a Configuration.
Enum of conflict resolution algorithms.
Supply an access token obtained via a refresh token (provided through the Configuration) using the second half of the Authorization Code Grant workflow.
Thrown by NetUtils.connect(java.net.Socket, java.net.SocketAddress, int) if it times out while connecting to the remote host.
Constants used with the S3AFileSystem.
Container represents an allocated resource in the cluster.
An instance of a running service container.
This entity represents a container belonging to an application attempt.
Container exit statuses indicating special exit circumstances.
ContainerId represents a globally unique identifier for a Container in the cluster.
ContainerLaunchContext represents all of the information needed by the NodeManager to launch a container.
Enumeration of the various aggregation types of a container log.
A simple log4j-appender for container's logs.
The protocol between an ApplicationMaster and a NodeManager to start/stop and increase resources of containers, and to get the status of running containers.
This exception is thrown on the (GetContainerReportRequest) API when the container doesn't exist in the AHS.
ContainerReport is a report of a container.
ContainerRetryContext indicates how a container retries after it fails to run.
Retry policy for relaunching a Container.
A simple log4j-appender for container's logs.
State of a Container.
The current state of the container of an application.
ContainerStatus represents the current status of a Container.
Container Sub-State.
TokenIdentifier for a container.
The request sent by
Application Master to the
Node Manager to change the resource quota of a container.
The response sent by the NodeManager to the ApplicationMaster when asked to update container resources.
Encodes the type of Container Update.
The content types to be computed, such as file, directory and symlink.
Store the summary of a content (a directory or a file).
This class encapsulates a MapReduce job and its dependency.
This class contains a set of utilities which help convert data structures from/to 'serializableFormat' to/from hadoop/native Java data structures.
Indicates the checksum comparison result.
Hadoop counters for the DistCp CopyMapper.
The corruption reason code.
A named counter that tracks the progress of a map/reduce job.
A group of Counters that logically belong together.
The common counter group interface.
A set of named counters.
Counters holds per-job/task counters, defined either by the Map-Reduce framework or by applications.
A counter record, comprising its name and value.
Group of counters, comprising counters from a particular counter Enum class.
Implements a counting Bloom filter, as defined by Fan et al. in a 2000 ToN paper.
CreateEncryptionZoneFlag is used in HdfsAdmin.createEncryptionZone(Path, String, EnumSet) to indicate what should be done when creating an encryption zone.
Custom Counter definitions.
CreateFlag specifies the file create semantics.
Enumeration of dir states in the dir map.
Obtain an access token via the credential-based OAuth2 workflow.
Exception which Hadoop's AWSCredentialsProvider implementations should
throw when there is a problem with the credential setup.
A provider of credentials or password for Hadoop applications.
A factory to create a list of CredentialProvider based on the path given in a
Configuration.
A class that provides the facilities of reading and writing
secret keys and Tokens.
Enum for CSE key types.
Protobuf enum csi.v0.ControllerServiceCapability.RPC.Type
Protobuf enum csi.v0.NodeServiceCapability.RPC.Type
Protobuf enum csi.v0.PluginCapability.Service.Type
Protobuf enum csi.v0.SnapshotStatus.Type
Protobuf enum csi.v0.VolumeCapability.AccessMode.Mode
Deprecated.
Replaced by Avro.
The checksum types.
An InputFormat that reads input data from an SQL table.
A RecordReader that reads records from a SQL table,
using data-driven WHERE clause splits.
Enums for features that change the layout version.
Acquire the block pool level and volume level locks first if you want to acquire the dir lock.
Protobuf enum hadoop.hdfs.datanode.BlockCommandProto.Action
Protobuf enum hadoop.hdfs.datanode.BlockIdCommandProto.Action
Protobuf enum hadoop.hdfs.datanode.DatanodeCommandProto.Type
Protobuf enum hadoop.hdfs.datanode.ErrorReportRequestProto.ErrorCode
Protobuf enum hadoop.hdfs.datanode.ReceivedDeletedBlockInfoProto.BlockStatus
The state of the storage.
OutputStream implementation that wraps a DataOutput.
Protobuf enum hadoop.hdfs.DataTransferEncryptorMessageProto.DataTransferEncryptorStatus
Protobuf enum hadoop.hdfs.OpWriteBlockProto.BlockConstructionStage
Protobuf enum hadoop.hdfs.ShortCircuitFdResponse
Status is a 4-bit enum.
Implement DBSplitter over date/time values.
A container for configuration property names for jobs with DB input/output.
An InputFormat that reads input data from an SQL table.
An OutputFormat that sends the reduce output to a SQL table.
A RecordReader that reads records from a SQL table.
DBSplitter will generate DBInputSplits to use with DataDrivenDBInputFormat.
Objects that are read from/written to a database should implement DBWritable.
Specifies the different types of decommissioning of nodes.
Specification of a stream-based 'de-compressor' which can be plugged into a CompressionInputStream to decompress data.
This class provides an interface for the Namenode and Router to audit event information.
The default metrics system singleton.
DefaultStringifier is the default implementation of the Stringifier interface, which stringifies objects using base64 encoding of their serialized form.
Default indicates Ordered, preferring OpenSSL; if it fails to load, fall back to Default_JSSE.
All the constants related to delegation tokens.
The DelegationTokenAuthenticatedURL is an AuthenticatedURL sub-class with built-in Hadoop Delegation Token functionality.
Client-side authentication token that handles Delegation Tokens.
Authenticator wrapper that enhances an Authenticator with Delegation Token support.
Delete operations.
Order of the destinations when we have multiple of them.
Stream for reading inotify events.
This is for counting distributed file system operations.
Diagnostic keys in the manifests.
This class encapsulates a codec which can decompress direct bytebuffers.
Specification of a direct ByteBuffer 'de-compressor'.
A directory entry in the task manifest.
Results returned by the RPC layer of DiskBalancer.
Various result values.
Event Dispatcher interface.
DistCp is the main driver-class for DistCpV2.
The Options class encapsulates all DistCp options.
File attributes for preserve.
Enumeration mapping configuration keys to distcp command line
options.
Deprecated.
DNS Operations.
DNS Implementation type.
An interface that must be implemented to allow pluggable
DNS-name/IP-address to RackID resolvers.
This class implements a value aggregator that sums up a sequence of double
values.
This class implements a value aggregator that sums up a sequence of double
values.
Writable for Double values.
Constants used in both the Client and the Application Master.
A duration with logging of final state at info or debug in the close() call.
Summary of duration tracking statistics as extracted from an IOStatistics instance.
Implements a dynamic Bloom filter, as defined in the INFOCOM 2006 paper.
Get statistics pertaining to blocks of type BlockType.STRIPED in the filesystem.
Erasure coding schema to housekeep relevant information.
This is a simple ByteBufferPool which just creates ByteBuffers as needed.
Enum EncryptionType to represent the level of encryption applied.
A simple class for representing an encryption zone.
Protobuf enum hadoop.hdfs.ReencryptActionProto
Protobuf enum hadoop.hdfs.ReencryptionStateProto
Description of a single service/component endpoint.
A Writable wrapper for EnumSet.
Enum representing POSIX errno values.
Events sent by the inotify system.
Interface defining events api.
Sent when an existing file is opened for append.
Sent when a file is closed after append or create.
Sent when a new file is created (including overwrite).
Sent when there is an update to directory or file (none of the metadata
tracked here applies to symlinks) that is not associated with another
inotify event.
Sent when a file, directory, or symlink is renamed.
Sent when a file is truncated.
Sent when a file, directory, or symlink is deleted.
A batch of events that all happened on the same transaction ID.
Interface for handling events of type T.
Avro encoding format supported by EventWriter.
Container property encoding execution semantics.
An object of this class represents a specification of the execution
guarantee of the Containers associated with a ResourceRequest.
Exit status: the value associated with each exit status is directly mapped to the process's exit code on the command line.
The request sent by the client to the ResourceManager to fail an application attempt.
The response sent by the ResourceManager to the client failing an application attempt.
Exception thrown to indicate service failover has failed.
Namenode state in the federation.
A fencing method is a method by which one node can forcibly prevent
another node from making continued progress.
This class implements a mapper/reducer class that can be used to perform
field selections in a manner similar to unix cut.
This class implements a mapper class that can be used to perform
field selections in a manner similar to unix cut.
This class implements a mapper/reducer class that can be used to perform
field selections in a manner similar to unix cut.
This class implements a reducer class that can be used to perform field
selections in a manner similar to unix cut.
Used when target file already exists for any operation and
is not configured to be overwritten.
Used when target file already exists for any operation and
is not configured to be overwritten.
An abstract class representing file checksums for files.
The FileContext class provides an interface for users of the Hadoop
file system.
A base class for file-based InputFormat.
A base class for file-based InputFormats.
Deprecated.
Deprecated.
Lists the types of file system operations.
An OutputCommitter that commits files specified in job output directory i.e.
An OutputCommitter that commits files specified in job output directory i.e.
A base class for OutputFormat.
A base class for OutputFormats that read from FileSystems.
Deprecated.
Deprecated.
This class is used to represent provided blocks that are file regions,
i.e., can be described using (path, offset, length).
A metrics sink that writes to a file.
A section of an input file.
A section of an input file.
Interface that represents the client side information for a file.
Flags for entity attributes.
An abstract base class for a fairly generic filesystem.
Keeps all the Azure Blob File System related configurations.
Create FSImage from an external namespace.
Thrown when an unhandled exception occurs during a file system operation.
Common statistic names for Filesystem-level statistics,
including internals.
Keeps all valid Azure Blob File System URI schemes.
Enum for file types.
A collection of file-processing util methods.
A FilterFileSystem contains some other file system, which it uses as its basic file system, possibly transforming the data along the way or providing additional functionality.
FilterOutputFormat is a convenience class that wraps OutputFormat.
FilterOutputFormat is a convenience class that wraps OutputFormat.
Enumeration of various final states of an Application.
The finalization request sent by the ApplicationMaster to inform the ResourceManager about its completion.
The response sent by the ResourceManager to an ApplicationMaster on its completion.
Resolver mapping all files to a configurable, uniform blocksize and replication.
Resolver mapping all files to a configurable, uniform blocksize.
FixedLengthInputFormat is an input format used to read input files
which contain fixed length records.
FixedLengthInputFormat is an input format used to read input files
which contain fixed length records.
Implement DBSplitter over floating-point values.
A WritableComparable for floats.
Entity that represents a record for flow activity.
This entity represents a flow run.
File system actions, e.g. read, write, etc.
The base interface which various FileSystem FileContext Builder
interfaces can extend, and which underlying implementations
will then implement.
FileSystem related constants.
A class that stores both masked and unmasked create modes
and is a drop-in replacement for masked permission.
Utility that wraps an FSInputStream in a DataInputStream and buffers input through a BufferedInputStream.
Utility that wraps an OutputStream in a DataOutputStream.
Builder for FSDataOutputStream and its subclasses.
Thrown for unexpected filesystem errors, presumed to reflect disk errors in the native filesystem.
Supported section name.
Protobuf enum hadoop.hdfs.fsimage.INodeSection.INode.Type
Protobuf enum hadoop.hdfs.fsimage.SnapshotDiffSection.DiffEntry.Type
FSInputStream is a generic old InputStream with a little bit of RAF-style seek ability.
A class for file/directory permissions.
Protobuf enum hadoop.fs.FileStatusProto.FileType
Protobuf enum hadoop.fs.FileStatusProto.Flags
Provides server default configuration values to clients.
This class is used to represent the capacity, free and used space on a FileSystem.
Store Type enum to hold label and attribute.
Traversal of an external FileSystem.
Dynamically assign ids to users/groups as they appear in the external
filesystem.
Filter for block file names stored on the file system volumes.
A class to wrap a
Throwable into a Runtime Exception.
A FileSystem backed by an FTP client provided by Apache Commons Net.
Builder for input streams and subclasses whose return value is actually a completable future: this allows for better asynchronous operation.
Builder for input streams and subclasses whose return value is
actually a completable future: this allows for better asynchronous
operation.
Future IO Helper methods.
A wrapper for Writable instances.
Request class for getting all the resource profiles from the RM.
Response class for getting all the resource profiles from the RM.
Request class for getting all the resource profiles from the RM.
Response class for getting all the resource profiles from the RM.
The request sent by a client to the
ResourceManager to get an
ApplicationAttemptReport for an application attempt.
The response sent by the
ResourceManager to a client requesting
an application attempt report.
The request from clients to get a list of application attempt reports of an
application from the
ResourceManager.
The response sent by the ResourceManager to a client requesting a list of ApplicationAttemptReport for application attempts.
The request sent by a client to the ResourceManager to get an ApplicationReport for an application.
The response sent by the ResourceManager to a client requesting an application report.
The request from clients to get a report of Applications in the cluster from the ResourceManager.
The response sent by the ResourceManager to a client requesting an ApplicationReport for applications.
The request from clients to get node to attribute value mapping for all or
a given set of Node AttributeKeys in the cluster from the
ResourceManager.
The response sent by the
ResourceManager to a client requesting
node to attribute value mapping for all or a given set of Node AttributeKeys.
The request sent by clients to get cluster metrics from the ResourceManager.
The response sent by the
ResourceManager to a client
requesting cluster metrics.
The request from clients to get node attributes in the cluster from the
ResourceManager.
The response sent by the
ResourceManager to a client requesting
node attributes in the cluster.
The request from clients to get a report of all nodes
in the cluster from the ResourceManager.
The response sent by the
ResourceManager to a client
requesting a NodeReport for all nodes.
The request sent by a client to the
ResourceManager to get an
ContainerReport for a container.
The response sent by the
ResourceManager to a client requesting
a container report.
The request from clients to get a list of container reports, which belong to
an application attempt from the
ResourceManager.
The response sent by the
ResourceManager to a client requesting
a list of ContainerReport for containers.
The request sent by the ApplicationMaster to the
NodeManager to get ContainerStatus of requested containers.
The response sent by the NodeManager to the
ApplicationMaster when asked to obtain the
ContainerStatus of requested containers.
The request issued by the client to get a delegation token from
the ResourceManager.
Response to a GetDelegationTokenRequest request
from the client.
The request sent by an application master to the node manager to get
LocalizationStatuses of containers.
The response sent by the node manager to an application master when
localization statuses are requested.
The request sent by clients to get a new
ApplicationId for
submitting an application.
The response sent by the ResourceManager to the client for
a request to get a new ApplicationId for submitting applications.
The request sent by clients to get a new ReservationId for
submitting a reservation.
The response sent by the
ResourceManager to the client for
a request to get a new ReservationId for submitting reservations.
The request from clients to get nodes to attributes mapping
in the cluster from the
ResourceManager.
The response sent by the
ResourceManager to a client requesting
nodes to attributes mapping.
Get operations.
The request sent by clients to get queue information
from the
ResourceManager.
The response sent by the ResourceManager to a client
requesting information about queues in the system.
The request sent by clients to the ResourceManager to
get queue ACLs for the current user.
The response sent by the ResourceManager to clients
seeking queue ACLs for the user.
Request class for getting the details for a particular resource profile.
Response class for getting the details for a particular resource profile.
Stores global storage statistics objects.
A filter for POSIX glob pattern with brace expansions.
A glob pattern filter for metrics.
A metrics sink that writes to a Graphite server.
An interface for the implementation of a user-to-groups mapping service
used by
Groups.
GCS implementation of AbstractFileSystem.
This class creates gzip compressors/decompressors.
Indicates that a method has been passed illegal or invalid argument.
This class is intended to be installed by calling
Thread.setDefaultUncaughtExceptionHandler(UncaughtExceptionHandler)
in the main entry point.
Protocol interface that provides High Availability related primitives to
monitor and fail-over the service.
An HA service may be in active or standby state.
Helper for making
HAServiceProtocol RPC calls.
Protobuf enum hadoop.common.HARequestSource
Protobuf enum hadoop.common.HAServiceStateProto
Represents a target of the client side HA administration commands.
Implements a hash object that returns a certain number of hashed values.
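A minimal plain-Java sketch of such a multi-valued hash, here using double hashing to derive the values; this is an illustration under assumed names, not Hadoop's actual hash implementation:

```java
import java.util.Arrays;

// Illustrative sketch: derive k hash values for one key, the way a
// Bloom-filter-style hash object does. Double hashing
// (h_i = h1 + i*h2 mod maxValue) is an assumption for this sketch,
// not Hadoop's algorithm.
public class MultiHashSketch {
    public static int[] hash(byte[] key, int k, int maxValue) {
        int h1 = Arrays.hashCode(key);
        int h2 = h1 * 31 + 17; // a second, loosely independent hash
        int[] result = new int[k];
        for (int i = 0; i < k; i++) {
            result[i] = Math.floorMod(h1 + i * h2, maxValue);
        }
        return result;
    }

    public static void main(String[] args) {
        // Four bit positions in [0, 1024) for one key.
        System.out.println(Arrays.toString(hash("hello".getBytes(), 4, 1024)));
    }
}
```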
Partition keys by their
Object.hashCode().
Partition keys by their
Object.hashCode().
The public API for performing administrative functions on HDFS.
Extension of
AuditLogger.
Re-encrypt encryption zone actions.
This enum wraps above Storage Policy ID and name.
Storage policy satisfier service modes.
Upgrade actions.
The Hdfs implementation of
FSDataInputStream.
The Hdfs implementation of
FSDataOutputStream.
Set of features potentially active on an instance.
File access permissions mode.
Protobuf enum
hadoop.hdfs.BlockChecksumTypeProto
Types of recognized blocks.
Checksum algorithms/types used in HDFS.
Make sure this enum's integer values match enum values' id properties defined
in org.apache.hadoop.util.DataChecksum.Type
Cipher suite.
Crypto protocol version used to access encrypted files.
Protobuf enum
hadoop.hdfs.DatanodeInfoProto.AdminState
Protobuf enum
hadoop.hdfs.DatanodeStorageProto.StorageState
EC policy state.
Protobuf enum
hadoop.hdfs.HdfsFileStatusProto.FileType
Protobuf enum
hadoop.hdfs.HdfsFileStatusProto.Flags
Types of recognized storage media.
States, which a block can go through while it is under construction.
Defines the NameNode role.
Type of the node
Block replica states, which it can go through while being constructed.
Startup options for rolling upgrade.
Startup options
Protobuf enum
hadoop.hdfs.MountTableRecordProto.DestOrder
Protobuf enum hadoop.hdfs.NamenodeCommandProto.Type
Protobuf enum hadoop.hdfs.NamenodeRegistrationProto.NamenodeRoleProto
Protobuf enum
hadoop.hdfs.NNHAStatusHeartbeatProto.State
State of a block replica at a datanode
The public utility API for HDFS.
Exception thrown to indicate that health check of a service failed.
This class extends timeline entity and defines parent-child relationships
with other entities.
This class provides a way to interact with history files in a thread safe
manner.
Provides an API to query jobs that have finished.
Responsible to keep all abfs http headers here.
Http operation types
Responsible to keep all Http Query params here.
The X-FRAME-OPTIONS header in HTTP response to mitigate clickjacking
attack.
This is an IAM credential provider which wraps a
ContainerCredentialsProvider
to provide credentials when the S3A connector is instantiated on AWS EC2
or the AWS container services.
A general identifier, which internally stores the id
as an integer.
A general identifier, which internally stores the id
as an integer.
Implements the identity function, mapping inputs directly to outputs.
Performs no reduction, writing all input values directly to the output.
An interface for the implementation of <userId,
userName> mapping and <groupId, groupName>
mapping.
Any key type that is comparable at native side must implement this interface.
An INativeSerializer serializes and deserializes data transferred between
Java and native.
The request sent by
Application Master to the
Node Manager to change the resource quota of a container.
The response sent by the
NodeManager to the
ApplicationMaster when asked to increase container resource.
Deprecated.
Replaced by Avro.
InMemoryAliasMap is an implementation of the InMemoryAliasMapProtocol for
use with LevelDB.
Full inner join.
Full inner join.
Protobuf enum
hadoop.hdfs.EventType
Protobuf enum hadoop.hdfs.INodeType
Protobuf enum hadoop.hdfs.MetadataUpdateType
InputFormat describes the input-specification for a
Map-Reduce job.
InputFormat describes the input-specification for a
Map-Reduce job.
Utility for collecting samples and writing a partition file for
TotalOrderPartitioner.
InputSplit represents the data to be processed by an
individual Mapper.
InputSplit represents the data to be processed by an
individual Mapper.
Enum of input stream types.
An (extensible) enum of kinds of instantiation failure.
Integer configuration value Validator.
Implement DBSplitter over integer values.
Annotation to inform users of a package, class or method's intended audience.
Annotation to inform users of how much to rely on a particular package,
class or method not changing over time.
Helpers to create interned metrics info.
A WritableComparable for ints.
Exception to wrap invalid Azure service error responses and exceptions
raised on network IO.
Thrown when there is an attempt to perform an invalid operation on an ACL.
Thrown when a configuration value is invalid
Thrown when a file system property is invalid.
Used when file type differs from the desired file type, like
getting a file when a directory is expected.
Exception thrown when an invalid ingress service is encountered.
This class wraps a list of problems with the input, so that the user
can get a list of problems together instead of finding and fixing them one
by one.
This class wraps a list of problems with the input, so that the user
can get a list of problems together instead of finding and fixing them one
by one.
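The "list of problems" pattern above can be sketched in plain Java; the class and method names here are illustrative, not Hadoop's:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch: collect every input problem, then throw one
// exception carrying the whole list, so the caller can report and fix
// them together instead of one by one.
public class ProblemListException extends IOException {
    private final List<IOException> problems;

    public ProblemListException(List<IOException> problems) {
        super(problems.size() + " input problem(s)");
        this.problems = Collections.unmodifiableList(new ArrayList<>(problems));
    }

    public List<IOException> getProblems() { return problems; }

    public static void main(String[] args) {
        List<IOException> found = new ArrayList<>();
        found.add(new IOException("input path does not exist"));
        found.add(new IOException("input path is not readable"));
        ProblemListException e = new ProblemListException(found);
        System.out.println(e.getMessage()); // prints "2 input problem(s)"
    }
}
```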
This exception is thrown when jobconf misses some mandatory attributes
or value of some attributes is invalid.
Path string is invalid either because it has invalid characters or due to
other file system specific reasons.
Thrown when the constraints encoded in a
PathHandle do not hold.
A path name was invalid.
Raised if an attempt to parse a record failed.
The exception that happens when you call invalid state transition.
Deprecated.
Use
InvalidStateTransitionException instead.
Thrown when URI authority is invalid.
Thrown when URI is invalid.
A
Mapper that swaps keys and values.
A Mapper that swaps keys and values.
IO Statistics.
Interface exported by classes which support
aggregation of
IOStatistics.
Utility operations to convert IO Statistics sources/instances
to strings, especially for robust logging.
Setter for IOStatistics entries.
Snapshot of statistics from a different source.
Support for working with IOStatistics.
A utility class for I/O related functionality.
An experimental
Serialization for Java Serializable classes.
A
RawComparator that uses a JavaSerialization
Deserializer to deserialize objects that are then compared via
their Comparable interfaces.
Contains utility methods and constants relating to Jetty.
The job submitter's view of the Job.
JobClient is the primary interface for the user-job to interact
with the cluster.
A map/reduce job configuration.
That is, what may be configured.
A read-only view of the job that is provided to the tasks while they
are running.
This class encapsulates a set of MapReduce jobs and its dependency.
Event types handled by Job.
JobID represents the immutable and unique identifier for
the job.
JobID represents the immutable and unique identifier for
the job.
Deprecated.
Provided for compatibility.
Used to describe the priority of the running job.
Used to describe the priority of the running job.
Class that contains the information regarding the Job Queues which are
maintained by the Hadoop Map/Reduce framework.
Describes the current status of a job.
Describes the current status of a job.
Current state of the job
State is no longer used since M/R 2.x.
Base class for Composite joins returning Tuples of arbitrary Writables.
Base class for Composite joins returning Tuples of arbitrary Writables.
This is the JMX management interface for JournalNode information
A metrics sink that writes to a Kafka broker.
Thrown when
UserGroupInformation failed with an unrecoverable error,
such as failure in kerberos login/logout, invalid subject etc.
The KerberosDelegationTokenAuthenticator provides support for
Kerberos SPNEGO authentication mechanism and support for Hadoop Delegation
Token operations.
The kerberos principal of the service.
This comparator implementation provides a subset of the features provided
by the Unix/GNU Sort.
This comparator implementation provides a subset of the features provided
by the Unix/GNU Sort.
Defines a way to partition keys based on certain key fields (also see
KeyFieldBasedComparator).
Defines a way to partition keys based on certain key fields (also see
KeyFieldBasedComparator).
A provider of secret key material for Hadoop applications.
A factory to create a list of KeyProvider based on the path given in a
Configuration.
This class treats a line in the input as a key/value pair separated by a
separator character.
This class treats a line in the input as a key/value pair separated by a
separator character.
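The key/value line split these input formats perform can be sketched in plain Java; the class name is illustrative and the tab default mirrors the documented behavior:

```java
// Illustrative sketch of the key/value line split: everything before
// the first separator (tab by default) is the key, everything after it
// is the value; a line with no separator yields the whole line as the
// key and an empty value.
public class KeyValueSplitSketch {
    public static String[] split(String line, char separator) {
        int pos = line.indexOf(separator);
        if (pos < 0) {
            return new String[] { line, "" };
        }
        return new String[] { line.substring(0, pos), line.substring(pos + 1) };
    }

    public static void main(String[] args) {
        String[] kv = split("apple\t42", '\t');
        System.out.println(kv[0] + " -> " + kv[1]); // prints "apple -> 42"
    }
}
```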
An
InputFormat for plain text files.
An InputFormat for plain text files.
The request sent by the client to the ResourceManager
to abort a submitted application.
The response sent by the ResourceManager to the client aborting
a submitted application.
An interface which services can implement to have their
execution managed by the ServiceLauncher.
Common Exit codes.
Enums for features that change the layout version before rolling
upgrade is supported.
A Convenience class that creates output lazily.
A Convenience class that creates output lazily.
A LevelDB based implementation of
BlockAliasMap.
A serializable lifecycle event: the time a state
transition occurred, and what state was entered.
Implement the FileSystem API for the checksummed local filesystem.
State of localization.
Represents the localization status of a resource.
The status of localization.
LocalResource represents a local resource required to
run a container.
LocalResourceType specifies the type
of a resource localized by the NodeManager.
LocalResourceVisibility specifies the visibility
of a resource localized by the NodeManager.
This class defines a FileStatus that includes a file's block locations.
LogAggregationContext represents all of the
information needed by the NodeManager to handle
the logs for an application.
Base class to implement Log Aggregation File Controller.
Status of Log aggregation.
Enumeration of log levels.
This is a state change listener that logs events at INFO level
A
Reducer that sums long values.
This class implements a value aggregator that maintains the maximum of
a sequence of long values.
This class implements a value aggregator that maintains the maximum of
a sequence of long values.
This class implements a value aggregator that maintains the minimum of
a sequence of long values.
This class implements a value aggregator that maintains the minimum of
a sequence of long values.
This class implements a value aggregator that sums up
a sequence of long values.
This class implements a value aggregator that sums up
a sequence of long values.
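The value-aggregator pattern behind these classes can be sketched in plain Java; the method names follow the documented aggregator style but this is an assumption-laden sketch, not Hadoop's code:

```java
// Illustrative sketch of a long-sum value aggregator: each
// addNextValue call folds one more long into the running sum, and
// getReport yields the aggregated result as a string.
public class LongSumSketch {
    private long sum = 0;

    public void addNextValue(long val) { sum += val; }

    public String getReport() { return String.valueOf(sum); }

    public static void main(String[] args) {
        LongSumSketch agg = new LongSumSketch();
        for (long v : new long[] {3, 5, 7}) agg.addNextValue(v);
        System.out.println(agg.getReport()); // prints "15"
    }
}
```

A max or min aggregator differs only in replacing the `+=` fold with `Math.max` or `Math.min`.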
A WritableComparable for longs.
This is a dedicated committer which requires the "magic" directory feature
of the S3A Filesystem to be enabled; it then uses paths for task and job
attempts in magic paths, so as to ensure that the final output goes direct
to the destination directory.
This is the Intermediate-Manifest committer.
Public constants for the manifest committer.
This is the committer factory to register as the source of committers
for the job/filesystem schema.
Statistic names for committers.
Summary data saved into a
_SUCCESS marker file.
The context that is given to the Mapper.
A file-based map from keys to values.
An OutputFormat that writes MapFiles.
An OutputFormat that writes MapFiles.
Maps input key/value pairs to a set of intermediate key/value pairs.
Maps input key/value pairs to a set of intermediate key/value pairs.
Expert: Generic interface for
Mappers.
Default MapRunnable implementation.
A Writable Map.
MarkableIterator is a wrapper iterator class that
implements the MarkableIteratorInterface.
Enumeration of credential types for use in validation methods.
This util class provides a method to register an MBean using
our standard naming convention as described in the doc
for MBeans.register(String, String, Object).
A Writable for MD5 hash values.
A mean statistic represented as the sum and the sample count;
the mean is calculated on demand.
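The sum-and-count representation described above can be sketched in a few lines of plain Java; the class name is illustrative:

```java
// Sketch of a mean-as-(sum, count) statistic: each sample updates only
// two fields, and the mean is computed on demand, which keeps updates
// cheap and makes two such statistics trivial to merge (add the sums
// and the counts).
public class MeanSketch {
    private double sum = 0;
    private long samples = 0;

    public void add(double value) { sum += value; samples++; }

    public double mean() { return samples == 0 ? 0.0 : sum / samples; }

    public static void main(String[] args) {
        MeanSketch m = new MeanSketch();
        m.add(1); m.add(2); m.add(3);
        System.out.println(m.mean()); // prints "2.0"
    }
}
```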
Exception - Meta Block with the same name already exists.
Exception - No such Meta Block with the given name.
Annotation interface for a single metric used to annotate a field or a method
in the class.
Annotation interface for a group of metrics
A metrics cache for sinks that don't support sparse updates.
The metrics collector interface
A general metrics exception wrapper
The metrics filter interface.
Interface to provide immutable metainfo for metrics.
Build a JSON dump of the metrics.
The plugin interface for the metrics framework
An immutable snapshot of metrics with a timestamp
The metrics record builder interface
An optional metrics registry class for creating and maintaining a
collection of MetricsMutables, making writing metrics source easier.
The metrics sink interface.
The source of metrics information.
The metrics system interface.
The JMX interface to the metrics system
Immutable tag for metrics (for grouping on host/queue/username etc.)
Build a string dump of the metrics.
A visitor interface for metrics
A monotonic clock from some arbitrary time base in the past, counting in
milliseconds, and not affected by settimeofday or similar system clock
changes.
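In the JDK, this kind of clock is backed by `System.nanoTime()`, which counts from an arbitrary origin and is unaffected by wall-clock adjustments; a minimal sketch:

```java
// Sketch of a monotonic clock: System.nanoTime() counts from an
// arbitrary time base and is not affected by settimeofday or similar
// wall-clock changes, so it is safe for measuring elapsed time.
public class MonotonicClockSketch {
    public static long monotonicNowMillis() {
        return System.nanoTime() / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        long t1 = monotonicNowMillis();
        Thread.sleep(10);
        long t2 = monotonicNowMillis();
        System.out.println("elapsed ms: " + (t2 - t1));
    }
}
```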
Mount procedures
The request sent by the client to the
ResourceManager
to move a submitted application to a different queue.
The response sent by the
ResourceManager to the client moving
a submitted application to a different queue.
An abstract InputFormat that returns MultiFileSplit's
in the MultiFileInputFormat.getSplits(JobConf, int) method.
A sub-collection of input files.
Base class for Composite join returning values derived from multiple
sources, but generally not tuples.
Base class for Composite join returning values derived from multiple
sources, but generally not tuples.
Exception raised in
S3AFileSystem.deleteObjects(software.amazon.awssdk.services.s3.model.DeleteObjectsRequest) when
one or more of the keys could not be deleted.
MultipartUploader is an interface for copying files multipart and across
multiple nodes.
MultipartUploaderBuilderImpl<S extends MultipartUploader,B extends org.apache.hadoop.fs.MultipartUploaderBuilder<S,B>>
Builder for
MultipartUploader implementations.
Hook for Transition.
This class supports MapReduce jobs that have multiple input paths with
a different
InputFormat and Mapper for each path.
This class supports MapReduce jobs that have multiple input paths with
a different InputFormat and Mapper for each path.
Encapsulate a list of IOException into an IOException.
This abstract class extends the FileOutputFormat, allowing the output
data to be written to different output files.
The MultipleOutputs class simplifies writing to additional outputs other
than the job default output via the
OutputCollector passed to
the map() and reduce() methods of the
Mapper and Reducer implementations.
The MultipleOutputs class simplifies writing output data
to multiple outputs
This class extends the MultipleOutputFormat, allowing the output data to be
written to different output files in sequence file output format.
This class extends the MultipleOutputFormat, allowing the output data to be
written to different output files in Text output format.
Multithreaded implementation for org.apache.hadoop.mapreduce.Mapper.
Multithreaded implementation for
MapRunnable.
The mutable counter (monotonically increasing) metric interface
A mutable int counter for implementing metrics sources
A mutable long counter
The mutable gauge metric interface
A mutable int gauge
A mutable long gauge
Watches a stream of long values, maintaining online estimates of specific
quantiles with provably low error bounds.
The mutable metric interface
Watches a stream of long values, maintaining online estimates of specific
quantiles with provably low error bounds.
A convenient mutable metric for throughput measurement
Helper class to manage a group of mutable rate metrics
This class synchronizes all accesses to the metrics it
contains, so it should not be used in situations where
there is high contention on the metrics.
Helper class to manage a group of mutable rate metrics.
This class maintains a group of rolling average metrics.
A mutable metric with stats.
A RecordReader that reads records from a MySQL table via DataDrivenDBRecordReader
A RecordReader that reads records from a MySQL table.
Categories of operations supported by the namenode.
Enums for features that change the layout version.
Deprecated.
Keeps the support state of PMDK.
Supported list of Windows access right flags
Write call flavors
Class encapsulates different types of files
NLineInputFormat which splits N lines of input as one split.
NLineInputFormat which splits N lines of input as one split.
NMClientAsync handles communication with all the NodeManagers
and provides asynchronous updates on getting responses from them.
The type of the event of interacting with a container.
The NMToken is used for authenticating communication with
NodeManager.
NMTokenCache manages NMTokens required for an Application Master
communicating with individual NodeManagers.
Implementation of StorageDirType specific to namenode storage
A Storage directory could be of type IMAGE which stores only fsimage,
or of type EDITS which stores edits or of type IMAGE_AND_EDITS which
stores both fsimage and edits.
The filenames used for storing the images.
This is a manifestation of the Zookeeper restrictions about
what nodes may act as parents.
Node Attribute is a kind of a label which represents one of the
attribute/feature of a Node.
Node Attribute Info describes a NodeAttribute.
Node AttributeKey uniquely identifies a given Node Attribute.
Enumeration of various node attribute op codes.
Type of a
node Attribute.
NodeId is the unique identifier for a node.
NodeReport is a summary of runtime information of a node
in the cluster.
State of a
Node.
Mapping of Attribute Value to a Node.
Taxonomy of the
NodeState that a
Node might transition into.
Raised if there is no ServiceRecord resolved at the end
of the specified path.
NotInMountpointException extends the UnsupportedOperationException.
Indicates the S3 object does not provide the versioning attribute required
by the configured change detection policy.
Null sink for region information emitted from FSImage.
Consume all outputs and put them in /dev/null.
Consume all outputs and put them in /dev/null.
Singleton Writable with no data.
Configure a connection to use OAuth2 authentication.
Sundry constants relating to OAuth2 within WebHDFS.
A polymorphic Writable that writes an instance with its class name.
OBS implementation of AbstractFileSystem, which delegates to the
OBSFileSystem.
Open file types to filter the results.
Little duration counter.
This class contains options related to file system operations.
Enum for indicating what mode to use when combining chunk and block
checksums to define an aggregate FileChecksum.
The standard
createFile() options.
The standard openFile() options.
Enum to support the varargs for rename() options
A InputFormat that reads input data from an SQL table in an Oracle db.
A RecordReader that reads records from an Oracle table via DataDrivenDBRecordReader
Implement DBSplitter over date/time values returned by an Oracle db.
A RecordReader that reads records from an Oracle SQL table.
Full outer join.
Full outer join.
OutputCommitter describes the commit of task output for a
Map-Reduce job.
OutputCommitter describes the commit of task output for a
Map-Reduce job.
OutputFormat describes the output-specification for a
Map-Reduce job.
OutputFormat describes the output-specification for a
Map-Reduce job.
This class filters log files from the given directory.
It doesn't accept paths having _logs.
Prefer the "rightmost" data source for this key.
Prefer the "rightmost" data source for this key.
Indicates that the parent of specified Path is not a directory
as expected.
Very simple shift-reduce parser for join expressions.
Very simple shift-reduce parser for join expressions.
Tagged-union type for tokens from the join expression.
Tagged-union type for tokens from the join expression.
Opaque, serializable reference to a part id for multipart uploads.
An
OutputCommitter that commits files specified
in job output directory i.e.
A partial listing of the children of a parent directory.
Interface for an
OutputCommitter
implementing partial commit of task output, as during preemption.
Partitions the key space.
Partitions the key space.
This is a special codec which does not transform the output.
Names a file or directory in a
FileSystem.
Opaque, serializable reference to an entity in the FileSystem.
A committer which somehow commits data written to a working directory
to the final directory during the commit process.
A factory for committers implementing the
PathOutputCommitter
methods, and so can be used from FileOutputFormat.
PlacementConstraint represents a placement constraint for a resource
allocation.
Placement constraint details.
Enum specifying the type of the target expression.
The unit of scheduling delay.
Placement constraint expression parser.
This class contains various static methods for the applications to create
placement constraints (see also
PlacementConstraint).
Advanced placement policy of the components of a service.
The scope of placement for the containers of a component.
The type of placement - affinity/anti-affinity/affinity-with-cardinality with
containers of another component or containers of the same component (self).
Base class for platforms.
Stream that permits positional reading.
Post operations.
This enum contains some of the values commonly used by history log events.
Specific container requested back by the
ResourceManager.
Description of resources requested back by the ResourceManager.
A PreemptionMessage is part of the RM-AM protocol, and it is used by
the RM to specify resources that the RM wants to reclaim from this
ApplicationMaster (AM).
Description of resources requested back by the cluster.
The priority assigned to a ResourceRequest or Application or Container
allocation
The different stages to track the time of.
A facility for reporting progress.
Enum for progress listener events.
Some common protocol types.
The
PseudoDelegationTokenAuthenticator provides support for
Hadoop's pseudo authentication mechanism that accepts
the user name specified as a query string parameter and support for Hadoop
Delegation Token operations.
A pure-java implementation of the CRC32 checksum that uses
the same polynomial as the built-in native CRC32.
A pure-java implementation of the CRC32 checksum that uses
the CRC32-C polynomial, the same polynomial used by iSCSI
and implemented on many Intel chipsets supporting SSE4.2.
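The two polynomials can be demonstrated with the JDK's own implementations (`java.util.zip.CRC32` and, since Java 9, `java.util.zip.CRC32C`); pure-Java implementations of the same polynomials should produce the same values, shown here with the standard "123456789" check input:

```java
import java.util.zip.CRC32;
import java.util.zip.CRC32C;

// Compare the two CRC polynomials using the JDK's built-in
// implementations. The standard check input "123456789" yields the
// well-known check values CBF43926 (CRC-32) and E3069283 (CRC-32C).
public class CrcSketch {
    public static long crc32(byte[] data) {
        CRC32 c = new CRC32();
        c.update(data, 0, data.length);
        return c.getValue();
    }

    public static long crc32c(byte[] data) {
        CRC32C c = new CRC32C();
        c.update(data, 0, data.length);
        return c.getValue();
    }

    public static void main(String[] args) {
        byte[] check = "123456789".getBytes();
        System.out.printf("CRC32  = %08X%n", crc32(check));  // CBF43926
        System.out.printf("CRC32C = %08X%n", crc32c(check)); // E3069283
    }
}
```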
Put operations.
QueueACL enumerates the various ACLs for queues.
Class to encapsulate Queue ACLs for a particular
user.
This entity represents a queue.
Class that contains the information regarding the Job Queues which are
maintained by the Hadoop Map/Reduce framework.
QueueInfo is a report of the runtime information of the queue.
Enum representing queue state
State of a Queue.
QueueUserACLInfo provides QueueACL information for
the given user.
Quota types.
This exception is thrown when modification to HDFS results in violation
of a directory quota.
Store the quota usage of a directory.
Interface for objects that can be compared through
RawComparator.
A
Comparator that operates directly on byte representations of
objects.
Implement the FileSystem API for the raw local filesystem.
The ReadBufferStatus for Rest AbfsClient
A custom command or a pluggable helper container to determine the readiness
of a container of a component.
Type.
Options that can be used when reading from a FileSystem.
Enumeration for different types of read operations triggered by AbfsInputStream.
Protocol that clients use to communicate with the NN/DN to do
reconfiguration on the fly.
Deprecated.
Replaced by Avro.
Deprecated.
Replaced by Avro.
Deprecated.
Replaced by Avro.
Deprecated.
Replaced by Avro.
RecordReader reads <key, value> pairs from an
InputSplit.
The record reader breaks the data into key/value pairs for input to the
Mapper.
RecordWriter writes the output <key, value> pairs
to an output file.
RecordWriter writes the output <key, value> pairs
to an output file.
The context passed to the Reducer.
Reduces a set of intermediate values which share a key to a smaller set of
values.
Reduces a set of intermediate values which share a key to a smaller set of
values.
General reflection utils
A regex pattern filter for metrics
A
Mapper that extracts text matching a regular expression.
A Mapper that extracts text matching a regular expression.
RegexMountPointInterceptorType.
The request sent by the
ApplicationMaster to ResourceManager
on registration.
The response sent by the ResourceManager to a new
ApplicationMaster on registration.
Policy to purge entries
Interface which can be implemented by a registry binding source
Constants for the registry, including configuration keys and default
values.
Base exception for registry operations.
Registry Operations
This is the client service for applications to work with the registry.
The Registry operations service.
Output of a
RegistryOperations.stat() call.
Static methods to work with registry types, primarily endpoints and the
list representation of addresses.
Utility methods for working with a registry.
This partitioner rehashes values returned by
Object.hashCode()
to get smoother distribution between partitions, which may improve
reduce time in some cases and should harm things in no cases.
This encapsulates all the required fields needed for a Container
ReInitialization.
The response to the
ReInitializeContainerRequest.
This encapsulates a Rejected SchedulingRequest.
Reason for rejecting a Scheduling Request.
The request from clients to release a resource in the shared cache.
The response to clients from the
SharedCacheManager when
releasing a resource in the shared cache.
Indicates the S3 object is out of sync with the expected version.
A set of remote iterators supporting transformation and filtering,
with IOStatisticsSource passthrough, and of conversions of
the iterators to lists/arrays and of performing actions
on the values.
Defines the different remove scheme for retouched Bloom filters.
The request issued by the client to renew a delegation token from
the ResourceManager.
The response to a renewDelegationToken call to the ResourceManager.
The replacement policies
The public API for ReplicaAccessor objects.
The public API for creating a new ReplicaAccessor.
Get statistics pertaining to blocks of type
BlockType.CONTIGUOUS
in the filesystem.
A facility for Map-Reduce applications to report progress and update
counters, status information etc.
ReservationACL enumerates the various ACLs for reservations.
ReservationAllocationState represents the reservation that is
made by a user.
ReservationDefinition captures the set of resource and time
constraints the user cares about regarding a reservation.
ReservationDeleteRequest captures the set of requirements the user
has to delete an existing reservation.
ReservationDeleteResponse contains the answer of the admission
control system in the ResourceManager to a reservation delete
operation.
ReservationId represents the globally unique identifier for
a reservation.
ReservationListRequest captures the set of requirements the
user has to list reservations.
ReservationListResponse captures the list of reservations that the
user has queried.
ReservationRequest represents the request made by an application to
the ResourceManager to reserve Resources.
Enumeration of various types of dependencies among multiple
ReservationRequests within one ReservationDefinition (from
least constraining to most constraining).
ReservationRequests captures the set of resource and constraints the
user cares about regarding a reservation.
ReservationSubmissionRequest captures the set of requirements the
user has to create a reservation.
The response sent by the ResourceManager to a client on
reservation submission.
ReservationUpdateRequest captures the set of requirements the user
has to update an existing reservation.
ReservationUpdateResponse contains the answer of the admission
control system in the ResourceManager to a reservation update
operation.
This defines an interface to a stateful Iterator that can replay elements
added to it directly.
This defines an interface to a stateful Iterator that can replay elements
added to it directly.
Resource models a set of computer resources in the cluster.
Resource determines the amount of resources (vcores, memory, network, etc.)
ResourceAllocationRequest represents an allocation made for a reservation for the current state of the plan.
ResourceBlacklistRequest encapsulates the list of resource-names which should be added or removed from the blacklist of resources for the application.
Interface class to obtain process resource usage
NOTE: This class should not be used by external users, but only by external developers to extend and include their own process-tree implementation, especially for platforms other than Linux and Windows.
The request sent by the ApplicationMaster to ask for localizing resources.
The response to the ResourceLocalizationRequest.
This exception is thrown when details of an unknown resource type are requested.
ResourceRequest represents the request made by an application to the ResourceManager to obtain various Container allocations.
Class to construct instances of ResourceRequest with specific options.
ResourceSizing contains information for the size of a SchedulingRequest, such as the number of requested allocations and the resources for each allocation.
Enum which represents the resource type.
ResourceUtilization models the utilization of a set of computer resources in the cluster.
The response to a restart Container request.
This filter provides protection against cross site request forgery (CSRF)
attacks for REST APIs.
Implements a retouched Bloom filter, as defined in the CoNEXT 2006 paper.
Enum for retry values.
Delegation Token Identifier that identifies the delegation tokens from the
Resource Manager.
Effect options.
Response to a Rollback request.
This class is a metrics sink that uses FileSystem to write the metrics logs.
States of the Router.
Different types of authentication as defined in RFC 1831
RpcKind determines the rpcEngine and the serialization of the rpc request
Protobuf enum hadoop.common.RpcRequestHeaderProto.OperationProto
Protobuf enum hadoop.common.RpcResponseHeaderProto.RpcErrorCodeProto
Protobuf enum hadoop.common.RpcResponseHeaderProto.RpcStatusProto
Protobuf enum hadoop.common.RpcSaslProto.SaslState
Message type
RPC reply_stat as defined in RFC 1831
RunningJob is the user-interface to query for details on a running Map-Reduce job.
This lock mode is used for FGL.
S3A implementation of AbstractFileSystem.
How will tokens be issued on request?
This enum is to centralize the encryption methods and
the value required in the configuration.
Class to help parse AWS S3 Logs.
An identical copy from org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction, that helps the other file system implementation to define SafeMode.
Authentication method
CLI for modifying scheduler configuration.
SchedulingRequest represents a request made by an application to the ResourceManager to obtain an allocation.
Class to construct instances of SchedulingRequest with specific options.
This class implements the DNSToSwitchMapping interface using a script configured via the CommonConfigurationKeysPublic.NET_TOPOLOGY_SCRIPT_FILE_NAME_KEY option.
The server-side secret manager for each token type.
Security Utils.
Stream that permits seeking.
Deprecated.
SequenceFiles are flat files consisting of binary key/value pairs.
The compression type used to compress key/value pairs in the SequenceFile.
InputFormat reading keys, values from SequenceFiles in binary (raw)
format.
InputFormat reading keys, values from SequenceFiles in binary (raw)
format.
An OutputFormat that writes keys, values to SequenceFiles in binary (raw) format.
An OutputFormat that writes keys, values to SequenceFiles in binary (raw) format.
This class is similar to SequenceFileInputFormat,
except it generates SequenceFileAsTextRecordReader
which converts the input keys and values to their
String forms by calling toString() method.
This class is similar to SequenceFileInputFormat, except it generates
SequenceFileAsTextRecordReader which converts the input keys and values
to their String forms by calling toString() method.
This class converts the input keys and values to their String forms by calling toString()
method.
This class converts the input keys and values to their String forms by
calling toString() method.
A class that allows a map/red job to work on a sample of sequence files.
A class that allows a map/red job to work on a sample of sequence files.
An InputFormat for SequenceFiles.
An InputFormat for SequenceFiles.
An OutputFormat that writes SequenceFiles.
An OutputFormat that writes SequenceFiles.
A RecordReader for SequenceFiles.
A RecordReader for SequenceFiles.
Manage name-to-serial-number maps for various string tables.
An abstract IPC service.
Helpers to handle server addresses
Service LifeCycle.
An Service resource has the following attributes.
Service states
This class defines constants that can be used in input spec for
variable substitutions
Types of ServiceEvent.
Exception thrown to indicate that an operation performed
to modify the state of a service or application failed.
A service launch exception that includes an exit code.
State of ServiceManager.
This class contains a set of methods to work with services, especially
to walk them through their lifecycle.
JSON-marshallable description of a single component.
The current state of a service.
Interface to notify state changes of a service.
Exception that can be raised on state change operations, whose
exit code can be explicitly set, determined from that of any nested
cause, or a default value of
LauncherExitCodes.EXIT_SERVICE_LIFECYCLE_EXCEPTION.
Implements the service state model.
The current status of a submitted service, returned as a response to the
GET API.
Slider entities that are published to ATS.
Events that are used to store in ATS.
A file-based set of keys.
This is the client for YARN's shared cache.
This credential provider has jittered between existing and non-existing,
but it turns up in documentation enough that it has been restored.
A base class for running a Shell command.
Enumeration of various signal container commands.
A WritableComparable for shorts.
The ShutdownHookManager enables running shutdownHook in a deterministic order, higher priority first.
Enumeration of various signal container commands.
The request sent by the client to the ResourceManager or by the ApplicationMaster to the NodeManager to signal a container.
The response sent by the ResourceManager to the client signalling a container.
Support simple credentials for authenticating with AWS.
Hook for Transition.
Map all owners/groups in external system to a single user in FSImage.
Sizes of binary values and some other common sizes.
Utility class for skip bad records functionality.
Lists the types of operations on which disk latencies are measured.
The type of trace in input.
Types of the difference, which include CREATE, MODIFY, DELETE, and RENAME.
Specialized SocketFactory to create sockets with a SOCKS proxy
A Writable SortedMap.
An InputStream covering a range of compressed data.
This interface is meant to be implemented by those compression codecs
which are capable of compressing / de-compressing a stream starting at any
arbitrary position.
During decompression, data can be read off from the decompressor in two
modes, namely continuous and blocked.
Configure a connection to use SSL authentication.
Specialized SocketFactory to create sockets with a SOCKS proxy
The request sent by the
ApplicationMaster to the
NodeManager to start a container.
The request which contains a list of
StartContainerRequest sent by
the ApplicationMaster to the NodeManager to
start containers.
The response sent by the
NodeManager to the
ApplicationMaster when asked to start an allocated
container.
State machine topology.
Base implementation of a State Store driver.
Driver class for an implementation of a
StateStoreService
provider.
State Store driver that stores a serialization of the records.
A State Transition Listener.
Statistics which are collected in S3A.
Enum for statistic types.
Enum of statistic types.
A metrics sink that writes metrics to a StatsD daemon.
The request sent by the
ApplicationMaster to the
NodeManager to stop containers.
The response sent by the
NodeManager to the
ApplicationMaster when asked to stop allocated
containers.
StorageStatistics contains statistics data for a FileSystem or FileContext
instance.
Defines the types of supported storage media.
Class that maintains different forms of Storage Units.
Common statistic names for object store operations.
This class provides an implementation of ResetableIterator.
This class provides an implementation of ResetableIterator.
Interface to query streams for supported capabilities.
Deprecated.
Static methods to implement policies for
StreamCapabilities.
Requirements a factory may have.
These are common statistic names.
Enumeration of particular allocations to be reclaimed.
Stringifier interface offers two methods to convert an object
to a string representation and restore the object given its
string representation.
Provides string interning utility methods.
The traditional binary prefixes, kilo, mega, ..., exa,
which can be represented by a 64-bit integer.
This class implements a value aggregator that maintains the biggest of a sequence of strings.
This class implements a value aggregator that maintains the biggest of a sequence of strings.
This class implements a value aggregator that maintains the smallest of a sequence of strings.
This class implements a value aggregator that maintains the smallest of a sequence of strings.
This entity represents user defined entities to be stored under the sub
application table.
The request sent by a client to submit an application to the ResourceManager.
The response sent by the ResourceManager to a client on application submission.
The main entry point and job submitter.
AbstractFileSystem implementation for HDFS over the web (secure).
This is the interface for flush/sync operations.
Plugin to calculate resource information on the system.
Implementation of
Clock that gives the current time from the system
clock in milliseconds.
Simple DNSToSwitchMapping implementation that reads a 2 column text file.
Deprecated.
Provided for compatibility.
The context for task attempts.
Event types handled by TaskAttempt.
TaskAttemptID represents the immutable and unique identifier for
a task attempt.
TaskAttemptID represents the immutable and unique identifier for
a task attempt.
This is used to track task completion events on
job tracker.
This is used to track task completion events on
job tracker.
Event types handled by Task.
TaskID represents the immutable and unique identifier for
a Map or Reduce Task.
TaskID represents the immutable and unique identifier for
a Map or Reduce Task.
A context object that allows input and output from the task.
A report on the state of a task.
Possible Task States.
Information about TaskTracker.
Define MaWo Task Type.
Enum for map, reduce, job-setup, job-cleanup, task-cleanup task types.
Support session credentials for authenticating with AWS.
This class stores text using standard UTF8 encoding.
This class is used for block maps stored as text files,
with a specified delimiter.
An InputFormat for plain text files.
An InputFormat for plain text files.
An OutputFormat that writes plain text files.
An OutputFormat that writes plain text files.
Implement DBSplitter over text strings.
A TFile is a container of key-value pairs.
A client library that can be used to post some information in terms of a
number of conceptual entities.
The response of delegation token related request
This class contains the information about a timeline domain, which is used by a user to host a number of timeline entities, isolating them from others'.
This class contains the information about a timeline service domain, which is used by a user to host a number of timeline entities, isolating them from others'.
The class that hosts a list of timeline domains.
The class that hosts a list of timeline entities.
This class hosts a set of timeline entities.
The class that contains the meta information of some conceptual entity
and its related events.
The basic timeline entity data structure for timeline service v2.
TimelineEntityGroupId is an abstract way for
timeline service users to represent "a group of related timeline data".
Defines type of entity.
The class that contains the information of an event that is related to some
conceptual entity of an application.
This class contains the information of an event that belongs to an entity.
The class that hosts a list of events, which are categorized according to
their related entities.
The class that hosts a list of events that are only related to one entity.
This class holds health information for ATS.
Timeline health status.
This class contains the information of a metric that is related to some
entity.
Type of metric.
Aggregation operations.
A class that holds a list of put errors.
A class that holds the error code for one entity.
A client library that can be used to get Timeline Entities associated with
application, application attempt or containers.
Implementation of TimelineReaderClient interface.
The helper class for the timeline module.
A class that holds a list of put errors.
A class that holds the error code for one entity.
Thrown when a timeout happens.
Class for Timer Functionality.
The client-side form of the token.
Token is the security entity used by the framework
to verify authenticity of any resource.
A trivial renewer for token kinds that aren't managed.
This class provides user facing APIs for transferring secrets from
the job client to the tasks.
Tokenize the input values and emit each word with a count of 1.
A
Mapper that maps text values into <token,freq> pairs.
An identifier that identifies a token, may contain public information
about a token, including its kind (or type).
Indicates Token related information to be used
This is the interface for plugins that handle tokens.
Select token of type T from tokens for use with named service
A tool interface that supports handling of generic command-line options.
A utility to help run Tools.
Partitioner effecting a total order by reading split points from
an externally generated source.
Partitioner effecting a total order by reading split points from
an externally generated source.
Enum representing the version of the tracing header used in Azure Blob File System (ABFS).
Provides a trash facility which supports pluggable Trash policies.
This interface is used for implementing different Trash policies.
Traversal cursor in external filesystem.
Traversal yielding a hierarchical sequence of paths.
Enum to represent 3 values, TRUE, FALSE and UNKNOWN.
Thrown when tried to convert Trilean.UNKNOWN to boolean.
Simple enum to express {true, false, don't know}.
Writable type storing multiple Writables.
Writable type storing multiple Writables.
A Writable for 2D arrays containing a matrix of instances of a class.
The possible type codes.
Pluggable class for mapping ownership and permissions from an external
store to an FSImage.
This class implements a value aggregator that dedupes a sequence of objects.
This class implements a value aggregator that dedupes a sequence of objects.
Thrown when an unknown cipher suite is encountered.
The bucket or other AWS resource is unknown.
File system for a given file system name/scheme is not supported
MultipartUploader for a given file system name/scheme is not supported.
The request sent by the client to the
ResourceManager to set or
update the application priority.
The response sent by the
ResourceManager to the client on update
the application priority.
The request sent by the client to the
ResourceManager to set or
update the application timeout.
The response sent by the
ResourceManager to the client on update
application timeout.
UpdateContainerError is used by the Scheduler to notify the ApplicationMaster of an UpdateContainerRequest it cannot satisfy due to an error in the request.
UpdateContainerRequest represents the request made by an application to the ResourceManager to update an attribute of a Container such as its Resource allocation or ExecutionType.
An object that encapsulates an updated container and the type of Update.
Opaque, serializable reference to an uploadId for multipart uploads.
URL represents a serializable URL.
This class implements a wrapper for a user defined value aggregator
descriptor.
This class implements a wrapper for a user defined value
aggregator descriptor.
This entity represents a user.
User and group information for Hadoop.
Existing types of authentication methods
The request from clients to the
SharedCacheManager that claims a
resource in the shared cache.
The response from the SharedCacheManager to the client that indicates whether
a requested resource exists in the cache.
Implementation of
Clock that gives the current UTC time in
milliseconds.
Supporting Utility classes used by TFile, and shared by users of TFile.
A utility class.
Deprecated.
Replaced by Avro.
Volume access mode.
Volume type.
This interface defines the minimal protocol for value aggregators.
This interface defines the minimal protocol for value aggregators.
This class implements the common functionalities of
the subclasses of ValueAggregatorDescriptor class.
This class implements the common functionalities of
the subclasses of ValueAggregatorDescriptor class.
This class implements the generic combiner of Aggregate.
This class implements the generic combiner of Aggregate.
This interface defines the contract a value aggregator descriptor must
support.
This interface defines the contract a value aggregator descriptor must
support.
This is the main class for creating a map/reduce job using Aggregate
framework.
This is the main class for creating a map/reduce job using Aggregate
framework.
This abstract class implements some common functionalities of the generic mapper, reducer and combiner classes of Aggregate.
This abstract class implements some common functionalities of the generic mapper, reducer and combiner classes of Aggregate.
This class implements the generic mapper of Aggregate.
This class implements the generic mapper of Aggregate.
This class implements the generic reducer of Aggregate.
This class implements the generic reducer of Aggregate.
This class implements a value aggregator that computes the
histogram of a sequence of strings.
This class implements a value aggregator that computes the
histogram of a sequence of strings.
Policy to decide how many values to return to client when client asks for
"n" values and Queue is empty.
A base class for Writables that provides version checking.
This class returns build information about Hadoop components.
Thrown by
VersionedWritable.readFields(DataInput) when the
version of an object being read does not match the current implementation
version as returned by VersionedWritable.getVersion().
ViewFileSystem (extends the FileSystem interface) implements a client-side
mount table.
Utility APIs for ViewFileSystem.
ViewFs (extends the AbstractFileSystem interface) implements a client-side
mount table.
A WritableComparable for integer values stored in variable-length format.
A WritableComparable for longs in a variable-length format.
AbstractFileSystem implementation for HDFS over the web.
Reflection-friendly access to APIs which are not available in
some of the older Hadoop versions which libraries still
compile against.
A Mapper which wraps a given one to allow custom Mapper.Context implementations.
Proxy class for a RecordReader participating in the join framework.
Proxy class for a RecordReader participating in the join framework.
A Reducer which wraps a given one to allow for custom Reducer.Context implementations.
Reflection-friendly access to IOStatistics APIs.
A serializable object which implements a simple, efficient, serialization protocol, based on DataInput and DataOutput.
A Writable which is also Comparable.
A Comparator for WritableComparables.
Factories for non-public writables.
A factory for a class of Writable.
A
Serialization for Writables that delegates to
Writable.write(java.io.DataOutput) and
Writable.readFields(java.io.DataInput).
Flags to use when creating/writing objects.
The value of XAttr is byte[]; this class converts byte[] to some kind of string representation and back.
Class to pack XAttrs into byte[].
Note: this format is used both in-memory and on-disk.
Note: this format is used both in-memory and on-disk.
Protobuf enum hadoop.hdfs.XAttrProto.XAttrNamespaceProto
Protobuf enum hadoop.hdfs.XAttrSetFlagProto
This filter protects webapps from clickjacking attacks that
are possible through use of Frames to embed the resources in another
application and intercept clicks to accomplish nefarious things.
Enumeration of various states of a RMAppAttempt.
Enumeration of various states of an ApplicationMaster.
YarnClusterMetrics represents cluster metrics.
YarnException indicates exceptions from yarn servers.
This exception is thrown when a feature is being used which is not enabled
yet.
Protobuf enum hadoop.yarn.SubClusterStateProto
This class is intended to be installed by calling Thread.setDefaultUncaughtExceptionHandler(UncaughtExceptionHandler) in the main entry point.
The type of header for compressed data.
The compression level for zlib library.
The compression level for zlib library.
The headers to detect from compressed data.
State of re-encryption.