To create and manage access to directories in HDFS, log on to the machine that hosts the NameNode, and use the hadoop command.
Note: To get started, use the hdfs user account to create and manage directories. Once the initial directory structure is created and permissions are set, other user accounts can be used to manage access to directories.
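For example, on many deployments you can switch to the hdfs account and change to the directory that contains the hadoop command before running the steps that follow. The installation path shown here is an assumption; substitute the path that is used at your site:
su - hdfs
cd /opt/hadoop/bin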
To create a general-purpose directory:
- As the hdfs user account, create a directory named /shared:
./hadoop fs -mkdir /shared
- Open up access permissions on the directory:
./hadoop fs -chmod 1777 /shared
Note: This permissions mode sets the sticky bit, so only the superuser, the directory owner, and a file's owner can delete or move files within the directory. A quick check of this behavior follows the listing in the next step.
- Confirm that the commands succeeded:
./hadoop fs -ls /
Found 4 items
drwxr-xr-x - hdfs supergroup 0 2014-02-03 21:38 /data
drwxrwxrwt - hadoop supergroup 0 2014-02-14 21:23 /shared
drwxrwxrwt - hdfs supergroup 0 2014-01-17 11:07 /tmp
drwxr-xr-x - hdfs supergroup 0 2014-02-13 08:45 /user
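To check the sticky-bit behavior, you can log on as another account and confirm that it can add files to /shared but cannot remove files that it does not own. The sasdemo account, report.csv file, and somefile file in this sketch are hypothetical:
./hadoop fs -put report.csv /shared/report.csv
./hadoop fs -rm /shared/somefile
The first command succeeds because the directory is world-writable. The second command fails with a permission error whenever somefile is owned by a different account, because the sticky bit restricts deletion to the superuser, the directory owner, and the file's owner.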
To set up a directory for members of the sales group:
- Create a directory named /dept/sales:
./hadoop fs -mkdir -p /dept/sales
- Change the group ID:
./hadoop fs -chgrp sales /dept/sales
Note: The preceding command assumes that an operating system group named sales exists. You can use the SAS High-Performance Computing Management Console to create the group on the machines in the cluster. After you create the group, stop and then start Hadoop (so that the group is recognized).
- Provide access to only the hdfs user account and members of the sales group (a usage example follows the listing in the next step):
./hadoop fs -chmod 770 /dept/sales
- Confirm that the commands succeeded:
./hadoop fs -ls /dept
Found 1 items
drwxrwx--- - hdfs sales 0 2014-02-14 21:29 /dept/sales
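As a usage check, an account that belongs to the sales group can now write to and list the directory. The sasdemo account and sales.csv file in this sketch are hypothetical:
./hadoop fs -put sales.csv /dept/sales/sales.csv
./hadoop fs -ls /dept/sales
An account that is neither the hdfs user nor a member of the sales group receives a permission error for both commands, because mode 770 grants no access to other users.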
Note: The HDFS directory structure is similar to a UNIX file system. Directories have a user ID, group ID, and associated access permissions. More information about the hadoop command is available from http://hadoop.apache.org.
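For example, if a different account later needs to own and manage /dept/sales, the hdfs user (as the superuser) can transfer ownership and verify the change. The salesadmin account in this sketch is hypothetical:
./hadoop fs -chown salesadmin /dept/sales
./hadoop fs -ls /dept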