Data Quality Papers A-Z

B
Paper 1384-2015:
Building Flexible Jobs in DataFlux®
Creating DataFlux® jobs that can be executed from job scheduling software can be challenging. This presentation guides participants through the creation of a template job that accepts an email distribution list, subject, email verbiage, and attachment file name as macro variables. It also demonstrates how to call this job from a command line.
Read the paper (PDF).
Jeanne Estridge, Sinclair Community College
D
Paper 3261-2015:
Data Quality Scorecard
Many users would like to check the quality of data after the data integration process has loaded the data into a data set or table. The approach in this paper shows users how to develop a process that scores columns based on rules judged against a set of standards set by the user. Each rule has a standard that determines whether it passes, fails, or needs review (a green, red, or yellow score). A rule can be as simple as: Is the value for this column missing, or is this column within a valid range? Rules can also compare a column to one or more other columns, check for specific invalid entries, or compare a column value to a lookup table to determine whether the value is in the lookup table. Users can create their own rules, and each column can have any number of rules. For example, a rule can be created to compare a dollar column to a range of acceptable values. The user can specify that up to two percent of the values are allowed to be out of range; if two to five percent of the values are out of range, the data should be reviewed; and if over five percent of the values are out of range, the data is not acceptable. The entire table has a color-coded scorecard showing each rule and its score. Summary reports show columns by score and distributions of key columns. The scorecard enables the user to quickly assess whether the SAS data set is acceptable or whether specific columns need to be reviewed. Drill-down reports enable the user to examine why a column scored as it did. Based on the scores, the data set can be accepted or rejected, and the user will know where and why the data set failed. The process can store the data for each scorecard in a data mart, which enables the user to review the quality of the data over time and answer questions such as: Is the quality of the data improving overall? Are there specific columns that are improving or declining over time? What can we do to improve the quality of our data? This scorecard is not intended to replace the quality control of the data integration or ETL process; it is a supplement to it. The programs are written using only Base SAS® with the Output Delivery System (ODS), macro variables, and formats. This presentation shows how to: (1) use ODS HTML; (2) color-code cells with formats; (3) use formats as lookup tables; (4) use %INCLUDE statements to bring in template code snippets and simplify programming; and (5) use hyperlinks to launch stored processes from the scorecard.
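A minimal Base SAS sketch (not taken from the paper; the data set name, column name, range, and thresholds are illustrative) of the traffic-lighting idea described above, scoring one rule and color-coding the result with a format in ODS HTML:

   /* Assumed rule: dollar_amt in work.claims should fall between 0 and 100000. */
   proc format;
      value score_bg  low -< 2   = 'lightgreen'   /* pass: under 2% out of range   */
                      2   -< 5   = 'yellow'       /* review: 2% to 5% out of range */
                      5   - high = 'salmon';      /* fail: 5% or more out of range */
   run;

   proc sql;
      create table rule_score as
      select 'dollar_amt within 0-100000' as rule length=40,
             100 * mean(dollar_amt < 0 or dollar_amt > 100000) as pct_out_of_range
      from work.claims;
   quit;

   ods html file='scorecard.html';
   proc report data=rule_score nowd;
      column rule pct_out_of_range;
      define pct_out_of_range / display format=8.2
             style(column)=[background=score_bg.];   /* the format drives the cell color */
   run;
   ods html close;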
Read the paper (PDF). | Download the data file (ZIP).
Tom Purvis, Qualex Consulting Services, Inc.
Paper 1762-2015:
Did Your Join Work Properly?
We have many options for performing merges or joins these days, each with various advantages and disadvantages. Depending on how you perform your joins, different checks can help you verify whether the join was successful. In this presentation, we look at some sample data, use different methods, and see what kinds of tests can be done to ensure that the results are correct. For example, if the join is performed with PROC SQL, two criteria indicate success: the number of observations in the primary data set has not changed (presuming a one-to-one or many-to-one join), and a variable that should have been populated contains no missing values.
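A minimal sketch (table and variable names are placeholders) of those two checks, applied after a PROC SQL left join:

   proc sql noprint;
      create table joined as
      select a.*, b.region
      from   primary_data as a
             left join lookup as b
             on a.cust_id = b.cust_id;

      /* Check 1: the row count should match the primary table
         (presuming a one-to-one or many-to-one join).          */
      select count(*) into :n_primary trimmed from primary_data;
      select count(*) into :n_joined  trimmed from joined;

      /* Check 2: region should have been populated for every row. */
      select nmiss(region) into :n_missing trimmed from joined;
   quit;

   %put NOTE: primary=&n_primary joined=&n_joined rows with missing region=&n_missing;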
Read the paper (PDF).
Emmy Pahmer, inVentiv Health
Paper 3457-2015:
Documentation as You Go: aka Dropping Breadcrumbs!
Your project ended a year ago, and now you need to explain what you did, or rerun some of your code. Can you remember the process? Can you describe what you did? Can you even find those programs? Visually presented here are examples of tools and techniques that follow best practices to help us, as programmers, manage the flow of information from source data to a final product.
Read the paper (PDF).
Elizabeth Axelrod, Abt Associates Inc.
F
Paper 1752-2015:
Fixing Lowercase Acronyms Left Over from the PROPCASE Function
The PROPCASE function is useful when you are cleansing a database of names and addresses in preparation for mailing. But it does not know the difference between a proper name (in which initial capitalization should be used) and an acronym (which should be all uppercase). This paper explains an algorithm that determines with reasonable accuracy whether a word is an acronym and, if it is, converts it to uppercase.
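A simplified sketch of the general idea (not the author's algorithm; the no-vowel heuristic and the data set and variable names are assumptions): apply PROPCASE word by word, then re-uppercase short words that contain no vowels.

   data cleaned;
      set raw_names;
      length fixed word $100;
      do i = 1 to countw(company, ' ');
         word = propcase(scan(company, i, ' '));
         /* Crude acronym test: four characters or fewer and no vowels. */
         if length(word) <= 4 and not prxmatch('/[aeiou]/i', strip(word)) then
            word = upcase(word);
         fixed = catx(' ', fixed, word);
      end;
      drop i word;
   run;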
Read the paper (PDF). | Download the data file (ZIP).
Joe DeShon, Boehringer Ingelheim Vetmedica
Paper 3348-2015:
From ETL to ETL VCM: Ensure the Success of a Data Migration Project through a Flexible Data Quality Framework Using SAS® Data Management
Data quality is now more important than ever before. According to Gartner (2011), poor data quality is the primary reason why 40% of all business initiatives fail to achieve their targeted benefits. In response, Deloitte Belgium has created an agile data quality framework using SAS® Data Management to rapidly identify and resolve root causes of data quality issues and jump-start business initiatives, especially data migration projects. The approach uses both standard SAS Data Management functionality (such as standardization and parsing) and advanced features (such as macros, dynamic profiling in deployment mode, and extraction of the profiling results), allowing the framework to be agile and flexible and maximizing the reusability of specific components built in SAS Data Management.
Read the paper (PDF).
Yves Wouters, Deloitte
Valérie Witters, Deloitte
G
Paper SAS1852-2015:
Garbage In, Gourmet Out: How to Leverage the Power of the SAS® Quality Knowledge Base
Companies spend vast amounts of resources developing and enhancing proprietary software to clean their business data. Save time and obtain more accurate results by leveraging the SAS® Quality Knowledge Base (QKB), formerly a DataFlux® Data Quality technology. Tap into the existing QKB rules for cleansing contact information or product data, or easily design your own custom rules using the QKB editing tools. The QKB enables data management operations such as parsing, standardization, and fuzzy matching for contact information such as names, organizations, addresses, and phone numbers, or for product data attributes such as materials, colors, and dimensions. The QKB supports data in native character sets in over 38 locales. A single QKB can be shared by multiple SAS® Data Management installations across your enterprise, ensuring consistent results on workstations, servers, and massively parallel processing systems such as Hadoop. In this breakout, a SAS R&D manager demonstrates the power and flexibility of the QKB, and answers your questions about how to deploy and customize the QKB for your environment.
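A brief sketch of calling QKB definitions from SAS code (assumes SAS® Data Quality Server with the ENUSA locale of the QKB installed; the setup-file path, definition names, and input table are illustrative):

   %dqload(dqlocale=(ENUSA), dqsetuploc='C:\SAS\QKB\dqsetup.txt');

   data standardized;
      set contacts;
      std_name  = dqStandardize(name,  'Name',  'ENUSA');   /* standardization        */
      std_phone = dqStandardize(phone, 'Phone', 'ENUSA');
      name_mc   = dqMatch(name, 'Name', 85, 'ENUSA');       /* match code for fuzzy matching */
   run;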
Read the paper (PDF).
Brian Rineer, SAS
M
Paper 2481-2015:
Managing Extended Attributes With a SAS® Enterprise Guide® Add-In
SAS® 9.4 introduced extended attributes, which are name-value pairs that can be attached to either the data set or to individual variables. Extended attributes are managed through PROC DATASETS and can be viewed through PROC CONTENTS or through Dictionary.XATTRS. This paper describes the development of a SAS® Enterprise Guide® custom add-in that allows for the entry and editing of extended attributes, with the possibility of using a controlled vocabulary. The controlled vocabulary used in the initial application is derived from the lifecycle branch of the Data Documentation Initiative metadata standard (DDI-L).
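A short sketch of the underlying PROC DATASETS syntax the add-in wraps (library, data set, and attribute names are hypothetical):

   proc datasets library=mylib nolist;
      modify survey;
      xattr set ds  StudyTitle='Household Survey 2015';             /* data set level */
      xattr set var income (Concept='DDI-L: TotalHouseholdIncome'); /* variable level */
   quit;

   proc contents data=mylib.survey; run;       /* attributes appear in the listing */

   proc sql;                                   /* or query the dictionary table    */
      select * from dictionary.xattrs
      where libname='MYLIB' and memname='SURVEY';
   quit;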
Read the paper (PDF).
Larry Hoyle, IPSR, Univ. of Kansas
N
Paper 3455-2015:
Nifty Uses of SQL Reflexive Join and Subquery in SAS®
SAS® SQL is so powerful that you hardly miss using Oracle PL/SQL. One SAS SQL forte can be found in using the SQL reflexive join. Another area of SAS SQL strength is the SQL subquery concept. The focus of this paper is to show alternative approaches to data reporting and to show how to surface data quality problems using reflexive join and subquery SQL concepts. The target audience for this paper is the intermediate SAS programmer or the experienced ANSI SQL programmer new to SAS programming.
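Two minimal examples of the concepts mentioned above (table and column names are invented): a reflexive join that surfaces conflicting values for the same key, and a subquery that surfaces orphan records.

   proc sql;
      /* Reflexive (self) join: same subject, two different birth dates. */
      select a.subjid, a.birthdt, b.birthdt as birthdt2
      from   demog as a, demog as b
      where  a.subjid = b.subjid
        and  a.birthdt < b.birthdt;

      /* Subquery: lab records whose subject is missing from demog. */
      select subjid, labtest, labdt
      from   labs
      where  subjid not in (select subjid from demog);
   quit;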
Read the paper (PDF).
Cynthia Trinidad, Theorem Clinical Research
S
Paper SAS1907-2015:
SAS® Data Management: Technology Options for Ensuring a Quality Journey Through the Data Management Process
When planning for a journey, one of the main goals is to get the best value possible. The same thing could be said for your corporate data as it journeys through the data management process. It is your goal to get the best data in the hands of decision makers in a timely fashion, with the lowest cost of ownership and the minimum number of obstacles. The SAS® Data Management suite of products provides you with many options for ensuring value throughout the data management process. The purpose of this session is to focus on how the SAS® Data Management solution can be used to ensure the delivery of quality data, in the right format, to the right people, at the right time. The journey is yours, the technology is ours--together, we can make it a fulfilling and rewarding experience.
Read the paper (PDF).
Mark Craver, SAS
Paper 3309-2015:
Snapshot SNAFU: Preventative Measures to Safeguard Deliveries
Little did you know that your last delivery ran on incomplete data. To make matters worse, the client realized the issue first. Sounds like a horror story, no? A few preventative measures can go a long way in ensuring that your data are up-to-date and progressing normally. At the data set level, metadata comparisons between the current and previous data cuts will help identify observation and variable discrepancies. Comparisons will also uncover attribute differences at the variable level. At the subject level, they will identify missing subjects. By compiling these comparison results into a comprehensive scheduled e-mail, a data facilitator need only skim the report to confirm that the data is good to go--or in need of some corrective action. This paper introduces a suite of checks contained in a macro that compares data cuts at the data set, variable, and subject levels and produces an e-mail report. The wide use of this macro will help all SAS® users create better deliveries while avoiding rework.
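A minimal sketch of the kinds of checks described above (libraries CURR and PREV, data set AE, and variable SUBJID are placeholders); the results could then be assembled into the e-mailed report:

   proc contents data=curr.ae out=meta_curr(keep=name type length) noprint; run;
   proc contents data=prev.ae out=meta_prev(keep=name type length) noprint; run;
   proc sort data=meta_curr; by name; run;
   proc sort data=meta_prev; by name; run;

   /* Variable-level differences: added, dropped, or changed attributes. */
   proc compare base=meta_prev compare=meta_curr listall; id name; run;

   /* Subject-level check: subjects present in the prior cut but absent now. */
   proc sql;
      select distinct subjid from prev.ae
      except
      select distinct subjid from curr.ae;
   quit;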
Read the paper (PDF). | Download the data file (ZIP).
Spencer Childress, Rho, Inc.
Alexandra Buck, Rho, Inc.
Paper 3209-2015:
Standardizing the Standardization Process
A prevalent problem surrounding Extract, Transform, and Load (ETL) development is the ability to apply consistent logic and manipulation of source data when migrating to target data structures. Certain inconsistencies that add a layer of complexity include, but are not limited to, naming conventions and data types associated with multiple sources, numerous solutions applied by an array of developers, and multiple points of updating. In this paper, we examine the evolution of implementing a best practices solution during the process of data delivery, with respect to standardizing data. The solution begins with injecting the transformations of the data directly into the code at the standardized layer via Base SAS® or SAS® Enterprise Guide®. A more robust method that we explore is to apply these transformations with SAS® macros. This provides the capability to apply these changes in a consistent manner across multiple sources. We further explore this solution by implementing the macros within SAS® Data Integration Studio processes on the DataFlux® Data Management Platform. We consider these issues within the financial industry, but the proposed solution can be applied across multiple industries.
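An illustrative macro (the names and the recoding rule are assumptions) showing how one standardization rule can be applied consistently across differently structured sources:

   %macro standardize(inds=, outds=, ccyvar=);
      data &outds;
         set &inds;
         &ccyvar = upcase(strip(&ccyvar));
         if &ccyvar in ('US DOLLAR', 'USD$') then &ccyvar = 'USD';
      run;
   %mend standardize;

   /* The same logic applied to two differently named sources. */
   %standardize(inds=src1.trades,    outds=work.trades_std,    ccyvar=currency)
   %standardize(inds=src2.positions, outds=work.positions_std, ccyvar=ccy_cd)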
Read the paper (PDF).
Avery Long, Financial Risk Group
Frank Ferriola, Financial Risk Group
U
Paper 3202-2015:
Using SAS® Mapping Functionality to Measure and Present the Veracity of Location Data
Crowd sourcing of data is growing rapidly, enabled by smart devices equipped with assisted GPS location, tagging of photos, and mapping of other aspects of users' lives and activities. When such data is used, a fundamental assumption is made that the reported locations are accurate within the usual GPS limitation of approximately 10 m. However, as a result of a wide range of technical issues, the accuracy of the reported locations turns out to be highly variable and cannot be relied on; some locations are accurate, but many are highly inaccurate, and that can affect many of the decisions being made based on the data. An analysis of a set of data is presented that demonstrates that this assumption is flawed, along with examples of levels of inaccuracy that have significant consequences in a range of contexts. Using Base SAS®, the paper demonstrates the quality and veracity of the data and the scale of the errors that can be present. This analysis has critical significance in fields such as mobile location-based marketing, forensics, and law.
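One way such an accuracy analysis can be set up in Base SAS (a sketch; the data set, variable names, and reference coordinates are assumptions):

   data accuracy;
      set checkins;
      /* GEODIST with 'DK': coordinates in degrees, distance in kilometers. */
      error_m = 1000 * geodist(rep_lat, rep_lon, ref_lat, ref_lon, 'DK');
      flag_inaccurate = (error_m > 10);   /* outside the ~10 m GPS expectation */
   run;

   proc means data=accuracy n mean median p95 max;
      var error_m;
   run;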
Read the paper (PDF).
Richard Self, University of Derby
W
Paper SAS1440-2015:
Want an Early Picture of the Data Quality Status of Your Analysis Data? SAS® Visual Analytics Shows You How
When you are analyzing your data and building your models, you often find out that the data cannot be used in the intended way. Systematic patterns, incomplete data, and inconsistencies from a business point of view are often the reason. You wish you could get a complete picture of the quality status of your data much earlier in the analytic lifecycle. SAS® analytics tools like SAS® Visual Analytics help you profile and visualize the quality status of your data in an easy and powerful way. In this session, you learn advanced methods for analytic data quality profiling. You will see case studies based on real-life data, where we look at time series data from a bird's-eye view and interactively profile GPS trackpoint data from a sail race.
Read the paper (PDF). | Download the data file (ZIP).
Gerhard Svolba, SAS
Y
Paper 3407-2015:
You Say Day-ta, I Say Dat-a: Measuring Coding Differences, Considering Integrity and Transferring Data Quickly
We all know there are multiple ways to use SAS® language components to generate the same values in data sets and output (for example, the DATA step versus PROC SQL, IF-THEN/ELSE logic versus format-table conversions, PROC MEANS versus PROC SQL summarizations, and so on). However, do you compare those different ways to determine which are the most efficient in terms of computer resources used? Do you ever consider the time a programmer takes to develop or maintain code? In addition to determining efficient syntax, do you validate your resulting data sets? Do you ever check data values that must remain the same after being processed by multiple steps to verify that they really don't change? We share some simple coding techniques that have proven to save computer and human resources. We also explain some data validation and comparison techniques that ensure data integrity. In our distributed computing environment, we show a quick way to transfer data from a SAS server to a local client by using PROC DOWNLOAD and then PROC EXPORT on the client to convert the SAS data set to a Microsoft Excel file.
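A short sketch of the validation and transfer steps mentioned above (assumes SAS/CONNECT for PROC DOWNLOAD and SAS/ACCESS Interface to PC Files for the XLSX export; names and paths are placeholders):

   /* Validate: values that must not change across steps really did not. */
   proc compare base=before_step compare=after_step; run;

   /* Transfer the server data set to the local client ... */
   rsubmit;
      proc download data=work.results out=work.results; run;
   endrsubmit;

   /* ... then convert it to Excel locally. */
   proc export data=work.results
        outfile='C:\temp\results.xlsx'
        dbms=xlsx replace;
   run;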
Read the paper (PDF).
Jason Beene, Wells Fargo
Mary Katz, Wells Fargo Bank