Use the Import a File
directive to copy a delimited source file into a target table in HDFS
and register the target in Hive.
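What the directive accomplishes can be pictured as two steps: copy the delimited file into HDFS, then register a Hive table over the copied data. The sketch below illustrates those two steps with the standard hdfs and hive command-line tools; the file path, HDFS directory, table name, and column layout are hypothetical, and this is a conceptual illustration rather than how the directive itself is implemented.

```python
import subprocess

# Hypothetical paths and names, for illustration only.
local_file = "/tmp/orders.csv"      # delimited source file
hdfs_dir = "/user/demo/orders"      # target directory in HDFS
hive_table = "default.orders"       # table to register in Hive

# Step 1: copy the delimited file into HDFS.
subprocess.run(["hdfs", "dfs", "-mkdir", "-p", hdfs_dir], check=True)
subprocess.run(["hdfs", "dfs", "-put", "-f", local_file, hdfs_dir], check=True)

# Step 2: register an external Hive table over the copied data.
ddl = f"""
CREATE EXTERNAL TABLE IF NOT EXISTS {hive_table} (
    order_id INT,
    customer STRING,
    amount   DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '{hdfs_dir}'
"""
subprocess.run(["hive", "-e", ddl], check=True)
```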
As you use the directive,
it samples the source data and generates default column definitions
for the target. You can then edit the column names, types, and lengths.
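For intuition, the sketch below shows one way sampling-based defaults could be produced for a comma-delimited file: read the header and the first rows, then guess a type and length for each column. The function name and type-guessing rules are illustrative assumptions, not the directive's actual logic.

```python
import csv

def infer_column_definitions(path, delimiter=",", sample_rows=100):
    """Sample the first rows of a delimited file and guess a type per column."""
    def guess(values):
        try:
            for v in values:
                int(v)
            return "INT"
        except ValueError:
            pass
        try:
            for v in values:
                float(v)
            return "DOUBLE"
        except ValueError:
            return "STRING"

    with open(path, newline="") as f:
        reader = csv.reader(f, delimiter=delimiter)
        header = next(reader)                       # column names from the first row
        rows = [row for _, row in zip(range(sample_rows), reader)]

    columns = []
    for i, name in enumerate(header):
        values = [row[i] for row in rows if i < len(row) and row[i] != ""]
        col_type = guess(values) if values else "STRING"
        length = max((len(v) for v in values), default=0)
        columns.append({"name": name, "type": col_type, "length": length})
    return columns
```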
To simplify future imports,
the Import a File directive enables you to save column definitions
to a file and to import them from that file later. After you import
column definitions, you can edit them and update
the column definitions file.
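The directive's own column definitions file format is not shown here; as a stand-in, the sketch below round-trips the definitions produced by the earlier sampling sketch through a simple JSON file, so the save, import, edit, and update cycle is concrete.

```python
import json

def save_column_definitions(columns, path):
    """Write column definitions (name, type, length) to a JSON file."""
    with open(path, "w") as f:
        json.dump(columns, f, indent=2)

def load_column_definitions(path):
    """Read previously saved column definitions so they can be edited and reused."""
    with open(path) as f:
        return json.load(f)

# Example round trip: save, reload, edit one definition, and save again.
cols = [{"name": "order_id", "type": "INT", "length": 8}]
save_column_definitions(cols, "orders_columns.json")
cols = load_column_definitions("orders_columns.json")
cols[0]["type"] = "BIGINT"          # edit an imported definition
save_column_definitions(cols, "orders_columns.json")
```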
You can configure the directive
to create delimited Text-format targets in Hadoop using
an efficient bulk-copy operation.
In the source file,
the delimiter must consist of a single character or symbol with
a character code in the range of 0 to 255 (a single-byte character).
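In other words, the delimiter must be one character whose code point fits in a single byte. An illustrative check (not part of the directive) might look like this:

```python
def is_valid_delimiter(delimiter: str) -> bool:
    """Accept only a single character whose code point fits in one byte (0-255)."""
    return len(delimiter) == 1 and ord(delimiter) <= 255

print(is_valid_delimiter(","))    # True
print(is_valid_delimiter("|"))    # True
print(is_valid_delimiter("::"))   # False: two characters
print(is_valid_delimiter("§"))    # True: code point 167 fits in one byte
```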
To learn more about
delimiters and column definitions files, refer to the following example.
To copy database tables
into Hadoop using a database-specific JDBC driver, use
the Copy Data to Hadoop directive.