Use the Import a File
directive to copy a delimited source file into a target table in HDFS
and register the target in Hive.
The directive samples
the source data and generates default column definitions for the target.
You can then edit the column names, types, and lengths.
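As a rough illustration of that sampling step, the following Python sketch reads the first rows of a delimited file and guesses a default type for each column. The file name, sample size, and type rules are assumptions for illustration only; they do not reflect the directive's own sampling logic.

import csv

def guess_type(values):
    """Very rough type inference: try INT, then DOUBLE, else fall back to STRING."""
    for cast, name in ((int, "INT"), (float, "DOUBLE")):
        try:
            for v in values:
                cast(v)
            return name
        except ValueError:
            continue
    return "STRING"

# "sales.csv" and the comma delimiter are assumptions for this sketch.
with open("sales.csv", newline="") as f:
    reader = csv.reader(f, delimiter=",")
    header = next(reader)
    sample = [row for _, row in zip(range(100), reader)]  # sample the first 100 rows

# Generate a default column definition (name, type) for each column.
columns = [(name, guess_type([row[i] for row in sample if i < len(row)]))
           for i, name in enumerate(header)]
for name, ctype in columns:
    print(f"{name}\t{ctype}")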
To simplify future imports,
the Import a File directive enables you to save column definitions
to a file and to import column definitions from an existing file. After you
import column definitions, you can edit them and update
the column definitions file.
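The format of the directive's column definitions file is not described here, so the sketch below only shows the general idea of saving definitions for reuse, using a hypothetical JSON layout. The file name, field names, and example columns are assumptions.

import json

# Hypothetical column definitions; the directive's actual file format may differ.
column_defs = [
    {"name": "customer_id", "type": "INT",    "length": None},
    {"name": "region",      "type": "STRING", "length": 20},
    {"name": "amount",      "type": "DOUBLE", "length": None},
]

# Save the definitions so a later import can start from them.
with open("column_defs.json", "w") as f:
    json.dump(column_defs, f, indent=2)

# Import the saved definitions, edit one, and write the file back out.
with open("column_defs.json") as f:
    defs = json.load(f)
defs[1]["length"] = 40  # widen the region column
with open("column_defs.json", "w") as f:
    json.dump(defs, f, indent=2)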
The directive can be
configured to create delimited Text-format targets in Hadoop using
an efficient bulk-copy operation.
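The mechanics of the bulk copy are internal to the directive, but conceptually the result resembles copying the file into HDFS and registering a delimited Text-format table over it. The sketch below drives the standard hdfs dfs and beeline command-line tools from Python; the paths, host name, table name, and columns are assumptions.

import subprocess

# Assumed paths and connection details for this sketch.
local_file = "sales.csv"
hdfs_dir = "/user/demo/sales"
jdbc_url = "jdbc:hive2://hiveserver:10000/default"

# Copy the delimited file into HDFS.
subprocess.run(["hdfs", "dfs", "-mkdir", "-p", hdfs_dir], check=True)
subprocess.run(["hdfs", "dfs", "-put", "-f", local_file, hdfs_dir], check=True)

# Register a delimited Text-format table over the copied data in Hive.
ddl = f"""
CREATE EXTERNAL TABLE IF NOT EXISTS sales (
  customer_id INT,
  region      STRING,
  amount      DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '{hdfs_dir}'
"""
subprocess.run(["beeline", "-u", jdbc_url, "-e", ddl], check=True)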
In the source file,
the delimiter must be a single character or symbol with an ASCII
character code in the range of 0 to 127 (\000 to \177 octal).
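A quick way to check whether a candidate delimiter satisfies this constraint is to test its character code, as in the sketch below; the specific delimiters tested are just examples.

def is_valid_delimiter(ch: str) -> bool:
    """A delimiter must be a single character with an ASCII code of 0 to 127."""
    return len(ch) == 1 and 0 <= ord(ch) <= 127

# Comma, tab, and pipe are valid; a non-ASCII symbol such as the section sign is not.
for candidate in [",", "\t", "|", "§"]:
    print(repr(candidate), is_valid_delimiter(candidate))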
To learn more about
delimiters and column definitions files, see the following example.
To copy database tables
into Hadoop using a database-specific JDBC driver, use
the Copy Data to Hadoop directive.