Apache Flume HDFS
- Publisher: Big Data Charmers
- Category: Big Data
| Channel | Revision | Published |
|---|---|---|
| latest/stable | 7 | 11 Nov 2020 |
| latest/stable | 6 | 11 Nov 2020 |
| latest/edge | 7 | 11 Nov 2020 |
| latest/edge | 4 | 11 Nov 2020 |
```
juju deploy apache-flume-hdfs
```
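Configuration options (documented below) can also be set at deploy time. A minimal sketch, assuming a YAML file keyed by the application name; the option values shown are illustrative:

```
# Illustrative config file; keys are the options documented below.
cat > flume.yaml <<'EOF'
apache-flume-hdfs:
  source_port: 4141
  roll_interval: 600
EOF

# Deploy the charm with this configuration applied.
juju deploy apache-flume-hdfs --config flume.yaml
```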
- channel_capacity | string
  Default: 1000
  The maximum number of events stored in the channel.
- channel_transaction_capacity | string
  Default: 100
  The maximum number of events the channel will take from a source or give to a sink per transaction. See the sketch after this item for tuning both channel settings together.
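Both channel settings can be changed on a running deployment with `juju config`. A small sketch (the values are illustrative; Flume expects the transaction capacity to be no larger than the channel capacity):

```
# Raise the channel buffer and the per-transaction batch together;
# channel_transaction_capacity should not exceed channel_capacity.
juju config apache-flume-hdfs channel_capacity=10000 channel_transaction_capacity=1000
```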
- dfs_replication | int
  Default: 3
  The DFS replication value. The default (3) matches the default used by the NameNode charm, but it may be overridden for this application.
- protocol | string
  Default: avro
  Ingestion protocol for the agent source. Currently only 'avro' is supported.
- resources_mirror | string
  URL from which to fetch resources (e.g., Flume binaries) instead of S3.
- roll_count | int
  Default: 0
  Number of events written to a file before it is rolled. The default (0) means never roll based on the number of events.
- roll_interval | int
  Default: 300
  Number of seconds to wait before rolling the current file. The default rolls the file after 5 minutes. A value of 0 means never roll based on a time interval.
- roll_size | string
  Default: 10000000
  File size, in bytes, that triggers a roll. The default rolls the file once it reaches 10 MB. A value of 0 means never roll based on file size. See the sketch after this item for how the roll options combine.
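The three roll_* options act together: the sink rolls the current file whenever any enabled trigger fires, and setting a trigger to 0 disables it. For example, to roll on size alone (a sketch; the ~64 MB figure is illustrative):

```
# Disable time- and count-based rolling; roll only when a file reaches ~64 MB.
juju config apache-flume-hdfs roll_interval=0 roll_count=0 roll_size=67108864
```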
- sink_compression | string
  Compression codec for the agent sink. An empty value writes events to HDFS uncompressed; specify 'snappy' to compress written events with the Snappy codec.
- sink_serializer | string
  Default: text
  The serializer used when the sink writes to HDFS. Either 'avro_event' or 'text' is supported. See the sketch after this item for combining the sink options.
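As one more sketch, the two sink options can be combined, for instance to write Snappy-compressed Avro event files instead of plain text (whether this pairing suits your downstream consumers is worth verifying):

```
# Switch the sink to Avro event files compressed with the snappy codec.
juju config apache-flume-hdfs sink_serializer=avro_event sink_compression=snappy
```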
- source_port | int
  Default: 4141
  Port on which the agent source is listening.
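Once the agent is up, the avro source can be exercised with the stock Flume avro client from any host with Flume installed. A minimal sketch, assuming the unit's address is in FLUME_HOST and the port is left at its default:

```
# Send each line of a local file as an event to the agent's avro source.
# FLUME_HOST is a placeholder for the apache-flume-hdfs unit address.
flume-ng avro-client -H "$FLUME_HOST" -p 4141 -F events.txt
```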