Converting Metadata for Environments and Warehouses
Overview
After an Environment has been converted with the Metadata Conversion wizard, and you have verified that the Environment's metadata is pointing to the correct local data stores, code libraries, or other local resources, the processes for its tables should work as they did before the conversion. However, before you try executing processes in the new Process Editor, keep the following in mind:
The Release 2.0 Process Editor has changed considerably from previous releases. For details about the Process Editor, click its window, then from the main menu, select Help → Using This Window.
The Metadata Conversion wizard creates a default process flow and Job for each data store with its own LOAD step in the original Environment. As a result, complex process flow diagrams in your old Environment will be separated into their constituent diagrams, one for each data store with a LOAD step.
In Release 2.0, the steps for executing a process flow have changed. Details are provided in the next section.
What the Converted Process Flows Look Like
The Metadata Conversion wizard creates a default process flow and Job for each data store with its own LOAD step in the original Environment. For example, suppose that before metadata conversion, the process flow for Credit Data Table looked like the one in Release 1.x Process Flow.
In a Release 1.x process flow, each icon (with the exception of inputs to ODDs) has its own LOAD step. Accordingly, in the previous display, ODD 1 and Credit Data Table each have their own LOAD steps.
After running the Metadata Conversion wizard, there would be a Job and a data flow for ODD 1, and there would be a separate Job and data flow for Credit Data Table. For example, after metadata conversion, if you opened ODD 1 in the Release 2.0 Process Editor, the following pair of items would be displayed in the left panel:
Display: ODD 1 Release 2.0 Job Hierarchy
In ODD 1 Release 2.0 Job Hierarchy, the item with the rectangle around it is the output table for the Job. The Job is represented by the parent icon of the output table.
A Job is a new Process Editor object in Release 2.0. It is a metadata record that specifies the processes that create one or more data stores. The processes can be specified with a process flow diagram in the Process Editor. If a process flow diagram is specified, SAS/Warehouse Administrator can generate code for the Job. Alternatively, a Job can reference a user-supplied program that contains the processes that create the data store(s). A Job can include scheduling metadata that enables the process flow or user-supplied program to be executed in batch mode at a specified date and time.
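For example, the process for a Job can be ordinary SAS code. The following minimal sketch shows what a user-supplied program referenced by a Job might look like; the libref, path, and table names are hypothetical, not objects that the wizard creates:

   /* Hypothetical user-supplied program for a Job.        */
   /* The libref, path, and table names are illustrative.  */
   libname dw 'c:\warehouse\data';   /* a local data store */

   data dw.credit;                   /* output table of the Job */
      set dw.odd1;                   /* input: a previously loaded ODD */
      /* mapping and transformation logic would go here */
   run;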
In the Process Editor, to the right of the Job and its output table, any process flow associated with the current Job is displayed, as shown in ODD 1 Release 2.0 Process Flow.
Display: ODD 1 Release 2.0 Process Flow
Credit Data Table would have its own Job and process flow, as shown in Credit Data Table Release 2.0 Job Hierarchy and Credit Data Table Release 2.0 Process Flow.
Display: Credit Data Table Release 2.0 Job Hierarchy
Display: Credit Data Table Release 2.0 Process Flow
Note that in Credit Data Table Release 2.0 Process Flow, ODD 1 appears as an input to a mapping process that feeds Credit Data Table. The Credit Data Table Job created by the Metadata Conversion wizard does not generate the code that creates and loads ODD 1; ODD 1 is assumed to be loaded and available to the Credit Data Table Job. To create Credit Data Table, then, you would first execute the ODD 1 Job shown in ODD 1 Release 2.0 Job Hierarchy, and then execute the Credit Data Table Job shown in Credit Data Table Release 2.0 Job Hierarchy.
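If you save the generated code for each Job to a file, one way to run both Jobs in the proper order outside the Process Editor is a small wrapper program. This is a minimal sketch; the file names are hypothetical and stand for wherever you saved the generated programs:

   /* Run the converted Jobs in dependency order:      */
   /* load ODD 1 first, then the table that reads it.  */
   %include 'c:\warehouse\jobs\load_odd1.sas';    /* ODD 1 Job             */
   %include 'c:\warehouse\jobs\load_credit.sas';  /* Credit Data Table Job */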
Note: In Release 2.0, you can specify multiple output tables in one Job as long as the metadata for the output tables is within the same repository (within the same Data Warehouse, for example).
The Metadata Conversion wizard does not automatically combine all of the data stores (output tables) in a given process flow into one Job, for two reasons. First, the site administrator must decide which Job should create each data store, and which Jobs should assume that the data store has already been created. Second, not all of the metadata for the data stores in a given process flow is necessarily in the same repository. For example, in Credit Data Table Release 2.0 Process Flow, the metadata for ODD 1 is stored at the Environment level (in the _MASTER repository), while the metadata for Credit Data Table is stored at the Data Warehouse level (in the current _DWMD repository).
For details about specifying multiple output tables in one Job, see the "Maintaining Jobs" chapter in this document.
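The process behind a multiple-output Job is ordinary SAS code that writes more than one table in a single step. The following minimal sketch illustrates the idea; the libref, table, and variable names are hypothetical, and both outputs land in the same library, so their metadata can live in the same repository:

   /* One process step, two output tables. */
   libname dw 'c:\warehouse\data';     /* hypothetical warehouse library */

   data dw.credit_ok dw.credit_err;    /* two output tables from one step */
      set dw.credit_stage;
      if account_id = . then output dw.credit_err;  /* numeric key is missing */
      else output dw.credit_ok;
   run;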
Testing Converted Process Flows
One way to test a Release 1.x process flow after it has been converted to Release 2.0 is to execute each Job interactively.
Note: Load the input tables first, then the output tables.
For example, in order to test the Credit Data Table process flow, you would execute the ODD 1 Job shown in ODD 1 Release 2.0 Job Hierarchy, then execute the Credit Data Table Job shown in Credit Data Table Release 2.0 Job Hierarchy. Here are the general steps for executing process flows in Release 2.0.
1. If you have not done so already, open the converted Environment in SAS/Warehouse Administrator Release 2.0. For details, see Opening a Converted Environment for the First Time.
2. In the Explorer, expand the relevant groups until the table whose process flow you want to execute is displayed.
3. In the Explorer, position the cursor on the table whose process flow you want to execute, click the right mouse button, and select Process.
The table you selected is opened in the Process Editor. The left panel contains a Job and an output table for the selected table, similar to ODD 1 Release 2.0 Job Hierarchy. The right panel contains the process flow for the selected table, similar to ODD 1 Release 2.0 Process Flow.
4. In the left panel (the Job Hierarchy view), position the cursor on the Job for the table whose process flow you want to execute, click the right mouse button, and select Run.
5. To verify that the process flow executed successfully, review the SAS log and check the output of the process on the file system. A few post-run checks that you can submit yourself are sketched after these steps.
If the process flow fails to execute because a local data store, code library, or other local resource cannot be found, verify that the metadata for that object in the Release 2.0 Environment points to the correct location. For details, see Verifying Local Resources in the Converted Environment.
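The checks mentioned in step 5, and the resource check just described, can all be run from a SAS session. This minimal sketch uses the automatic macro variable SYSERR, the PATHNAME function, and PROC CONTENTS; the libref dw and the table name are hypothetical:

   /* Post-run checks; the libref and table name are examples.  */
   %put NOTE: return code of the last step: &syserr;            /* 0 means success */
   %put NOTE: libref dw resolves to: %sysfunc(pathname(dw));    /* verify the local path */

   proc contents data=dw.credit;   /* confirm that the output table exists */
   run;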