Imbalanced data are frequently seen in fraud detection, direct marketing, disease prediction, and many other areas. Rare events are sometimes of primary interest. Classifying them correctly is the challenge that many predictive modelers face today. In this paper, we use SAS® Enterprise Miner™ on a marketing data set to demonstrate and compare several approaches that are commonly used to handle imbalanced data problems in classification models. The approaches are based on cost-sensitive measures and sampling measures. A rather novel technique called SMOTE (Synthetic Minority Over-sampling TEchnique), which has achieved the best result in our comparison, will be discussed.
Ruizhe Wang, GuideWell Connect
Novik Lee, GuideWell Connect
Yun Wei, GuideWell Connect
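The comparison above is carried out in SAS Enterprise Miner, but the simplest of the sampling measures it alludes to, random oversampling of the rare class, can be sketched in Base SAS. This is only an illustration, not SMOTE; the data set MARKETING and binary target RESPONSE are assumed names.

/* Minimal sketch: oversample the rare class (RESPONSE=1) with replacement. */
/* Not SMOTE - no synthetic records are created, existing ones are repeated. */
proc surveyselect data=marketing(where=(response=1))
                  out=minority_boost
                  method=urs          /* sampling with replacement                  */
                  sampsize=3000       /* illustrative target; adjust to your data   */
                  outhits seed=1234;
run;

data balanced;
   set marketing minority_boost;      /* original rows plus oversampled minority rows */
run;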
This paper presents a high-level infrastructure discussion with some explanation of the SAS® code used to implement a configurable batch framework for managing and updating the data rows and row-level permissions in SAS® OLAP Cube Studio. The framework contains a collection of reusable, parameter-driven Base SAS® macros, Base SAS custom programs, and UNIX or Linux shell scripts. This collection manages the typical steps and processes used for manipulating SAS files and for executing SAS statements. The Base SAS macro collection contains a group of utility macros that includes concurrent/parallel processing macros, SAS® Metadata Repository macros, SAS® Scalable Performance Data Engine table macros, table lookup macros, table manipulation macros, and other macros. There is also a group of OLAP-related macros that includes OLAP utility macros and OLAP permission table processing macros.
Ahmed Al-Attar, AnA Data Warehousing Consulting, LLC
When large amounts of data are available, choosing the variables for inclusion in model building can be problematic. In this analysis, a subset of variables was required from a larger set. This subset was to be used in a later cluster analysis with the aim of extracting dimensions of human flourishing. A genetic algorithm (GA), written in SAS®, was used to select the subset of variables from a larger set in terms of their association with the dependent variable life satisfaction. Life satisfaction was selected as a proxy for an as yet undefined quantity, human flourishing. The data were divided into subject areas (health, environment). The GA was applied separately to each subject area to ensure adequate representation from each in the future analysis when defining the human flourishing dimensions.
Lisa Henley, University of Canterbury
This session is intended to assist analysts in generating the best variables, such as monthly amount paid, daily number of customer service calls received, weekly hours worked on a project, or annual total sales for a specific product, by using simple mathematical transformations (square root, log, log-log, exp, and reciprocal). During a statistical data modeling process, analysts are often confronted with the task of computing derived variables from the existing variables. The advantage of this methodology is that the new variables might be more significant than the original ones. This paper provides a new way to compute all the possible variables using a set of math transformations. The code includes many SAS® features that are very useful tools for SAS programmers to incorporate in their future code, such as %SYSFUNC, SQL, %INCLUDE, CALL SYMPUT, %MACRO, SORT, CONTENTS, MERGE, DATA _NULL_, %DO...%TO loops, and many more.
Nancy Hu, Discover
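A minimal sketch of the kind of derived variables the session describes, assuming an input data set CUSTOMERS with a positive numeric variable MONTHLY_PAID (all names are illustrative):

data transformed;
   set customers;
   if monthly_paid > 0 then do;
      sqrt_paid   = sqrt(monthly_paid);               /* square root              */
      log_paid    = log(monthly_paid);                /* natural log              */
      loglog_paid = log(log(monthly_paid + 1) + 1);   /* guarded log-log          */
      rcp_paid    = 1 / monthly_paid;                 /* reciprocal               */
   end;
   exp_paid = exp(min(monthly_paid, 50));             /* capped to avoid overflow */
run;

Each new variable can then be tested against the target and kept only if it proves more significant than the original.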
This paper introduces a macro that generates a calendar report in two different formats. The first format displays the entire month in one plot, which is called a month-by-month calendar report. The second format displays the entire month in one row and is called an all-in-one calendar report. To use the macro, you just need to prepare a simple data set that has three columns: one column identifies the ID, one column contains the date, and one column specifies the notes for the dates. On the generated calendar reports, you can include notes and add different styles to certain dates. Also, the macro provides the option for you to decide whether those months in your data set that do not contain data should be shown on the reports.
Ting Sa, Cincinnati Children's Hospital Medical Center
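A minimal sketch of the three-column input data set described above (an ID, a date, and a note for that date); the column names are illustrative, so check the macro documentation for the exact names it expects:

data calendar_input;
   length id $10 note $50;
   input id $ date :mmddyy10. note & $50.;
   format date mmddyy10.;
   datalines;
P001 01/15/2015 Follow-up visit
P001 01/22/2015 Lab work
P002 02/03/2015 Initial consult
;
run;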
When promoting metadata in large packages from SAS® Data Integration Studio between environments, metadata and the underlying physical data can become out of sync. This can result in metadata items that cannot be opened by users because SAS® has thrown an error. It often falls to the SAS administrator to resolve the synchronization issues when they might not have been responsible for promoting the metadata items in the first place. In this paper, we will discuss a simple macro that can be used to compare the table metadata to that of the physical tables, and any anomalies will be noted.
David Moors, Whitehound Limited
In big data, many variables are polytomous, with many levels. The common way to handle a polytomous independent variable when the outcome is binary is to use a series of design variables, corresponding to the CLASS (or BY) options in PROC LOGISTIC. If big data contain many polytomous independent variables with many levels, design variables make the analysis very complicated in both computation time and results, and they might contribute little to the prediction of the outcome. This paper presents a new, simple method for logistic regression with polytomous independent variables in big data analysis. In the proposed method, the first step is to conduct an iterative statistical analysis with a SAS® macro program. Similar to the algorithm used in the creation of spline variables, this analysis searches for the proper aggregation groups, those with statistically significant differences, among all the levels of a polytomous independent variable. The SAS macro program iteratively searches for new level groups with statistically significant differences. It starts from level 1, the level with the smallest outcome mean, and tests the level 1 group against the level 2 group, which has the second-smallest outcome mean. If these two groups differ significantly, the level 2 group is then tested against the level 3 group. If level 1 and level 2 do not differ significantly, they are combined into a new level group 1, which is then tested against level 3. The process continues until all the levels have been tested. The original level values of the polytomous variable are then replaced by the new level values that reflect the statistically significant differences. The polytomous variable with the new levels can be described by the means of all new levels because of the one-to-one equivalence relationship of a piecewise function in logit from the polytomous variable's levels to the outcome means. It is easy to prove that the conditional mean of an outcome y given a polytomous variable x is a very good approximation based on maximum likelihood analysis. Compared with design variables, the new piecewise variable, which is based on the information in all levels, captures the impact of all levels as a single independent variable in a much simpler way. We have used this method in predictive models of customer attrition on polytomous variables such as state, business type, and customer claim type. All of these polytomous variables show significantly better prediction of customer attrition than either omitting them or representing them with design variables in the model development.
Jian Gao, Constant Contact
Jesse Harriot, Constant Contact
Lisa Pimentel, Constant Contact
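A minimal sketch of the first step of the iteration described above: compute the outcome mean for every level of the polytomous variable and order the levels by that mean, assuming a data set BIGDATA with binary outcome Y and polytomous variable STATE (names are illustrative). The sequential significance tests and level merging then walk through this ordered list.

proc means data=bigdata noprint nway;
   class state;
   var y;
   output out=level_means(drop=_type_ _freq_) mean=y_mean n=n_obs;
run;

proc sort data=level_means;
   by y_mean;      /* levels ordered from smallest to largest outcome mean */
run;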
Modernizing SAS® assets within an enterprise is key to reducing costs and improving productivity. Modernization implies consolidating multiple SAS environments into a single shared enterprise SAS deployment. While the benefits of modernization are clear, the management of a single-enterprise deployment is sometimes a struggle between business units who once had autonomy and IT that is now responsible for managing this shared infrastructure. The centralized management and control of a SAS deployment is based on SAS metadata. This paper provides a practical approach to the shared management of a centralized SAS deployment using SAS® Management Console. It takes into consideration the day-to-day needs of the business and IT requirements including centralized security, monitoring, and management. This document defines what resources are contained in SAS metadata, what responsibilities should be centrally controlled, and the pros and cons of distributing the administration of metadata content across the enterprise. This document is intended as a guide for SAS administrators and assumes that you are familiar with the concepts and terminology introduced in SAS® 9.4 Intelligence Platform: Security Administration Guide.
Jim Fenton, SAS
Robert Ladd, SAS
All SAS® data sets and variables have standard attributes. These include items such as creation date, engine, compression and sort information for data sets, and format and length information for variables. However, for the first time in SAS 9.4, the developer can add their own customized attributes to both data sets and variables. This paper shows how these extended attributes can be created, modified, and maintained. It suggests the sort of items that might be candidates for use as extended attributes and explains in what circumstances they can be used. It also provides a worked example of how they can be used to inform and aid the SAS programmer in creating SAS applications.
Chris Brooks, Melrose Analytics Ltd
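A minimal sketch of creating extended attributes in SAS 9.4 with PROC DATASETS, assuming a data set WORK.SALES; the attribute names are purely illustrative:

proc datasets lib=work nolist;
   modify sales;
   xattr add ds source='Monthly extract from ERP'      /* data set level attributes */
             owner='Finance Analytics';
   xattr add var amount (unit='USD' validated='Y');    /* variable level attributes */
quit;

proc contents data=work.sales;   /* extended attributes appear in the CONTENTS output */
run;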
Looking for a handy technique to have in your toolkit? Consider SAS® Views®, especially if you work with large data sets. After a brief introduction to SAS Views, I'll show you several cool ways to use them that will streamline your code and save workspace.
Elizabeth Axelrod, Abt Associates Inc.
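A minimal sketch of a DATA step view, assuming a large data set BIGCLAIMS (names are illustrative). The view stores only the program, so no intermediate copy of the data is written until the view is actually read:

data claims_2014 / view=claims_2014;
   set bigclaims;
   where year(service_date) = 2014;
   keep member_id service_date paid_amount;
run;

proc means data=claims_2014 sum;   /* the subsetting executes here, when the view is used */
   var paid_amount;
run;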
Medical tests are used for various purposes, including diagnosis, prognosis, risk assessment, and screening. Statistical methodology is often used to evaluate such tests; the measures most frequently used for binary data are sensitivity, specificity, and positive and negative predictive values. An important goal in diagnostic medicine research is to estimate and compare the accuracies of such tests. In this paper, I give a gentle introduction to measures of diagnostic test accuracy and introduce a SAS® macro to calculate the generalized score statistic and the weighted generalized score statistic for comparing predictive values, using formulas generalized and proposed by Andrzej S. Kosinski.
Lovedeep Gondara, University of Illinois Springfield
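A minimal sketch of the basic accuracy measures mentioned above for a single binary test, assuming a data set TESTDATA with TEST (1/0) and DISEASE (1/0); the macro described in the paper goes further and computes the (weighted) generalized score statistic for comparing two tests:

proc freq data=testdata noprint;
   tables disease*test / sparse out=cells(keep=disease test count);
run;

proc transpose data=cells out=wide prefix=n_;
   var count;
   id disease test;
run;

data accuracy;
   set wide;
   sensitivity = n_11 / (n_11 + n_10);   /* TP / (TP + FN) */
   specificity = n_00 / (n_00 + n_01);   /* TN / (TN + FP) */
   ppv         = n_11 / (n_11 + n_01);   /* TP / (TP + FP) */
   npv         = n_00 / (n_00 + n_10);   /* TN / (TN + FN) */
run;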
The paper introduces users to how they can use a set of SAS® macros, %LIFETEST and %LIFETESTEXPORT, to generate survival analysis reports for data with or without competing risks. The macros provide a wrapper of PROC LIFETEST and an enhanced version of the SAS autocall macro %CIF to give users an easy-to-use interface to report both survival estimates and cumulative incidence estimates in a unified way. The macros also provide a number of parameters to enable users to flexibly adjust how the final reports should look without the need to manually input or format the final reports.
Zhen-Huan Hu, Medical College of Wisconsin
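A minimal sketch of the PROC LIFETEST call that the %LIFETEST wrapper builds on, assuming a data set SURV with time T, event indicator STATUS (0 = censored), and group ARM (names are illustrative):

proc lifetest data=surv plots=survival(atrisk);
   time t*status(0);
   strata arm / test=logrank;
run;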
Delimited text files are often plagued by appended and/or truncated records. Writing customized SAS® code to import such a text file and break out into fields can be challenging. If only there was a way to fix the file before importing it. Enter the file_fixing_tool, a SAS® Enterprise Guide® project that uses the SAS PRX functions to import, fix, and export a delimited text file. This fixed file can then be easily imported and broken out into fields.
Paul Genovesi, Henry Jackson Foundation for the Advancement of Military Medicine, Inc.
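A minimal sketch of the PRX-based idea: accumulate physical lines until a regular expression confirms a complete record, then write it out. It assumes a pipe-delimited file with 8 fields (7 delimiters); file names and the field count are illustrative, and the project described above is considerably more robust.

data _null_;
   infile 'broken.txt' lrecl=32767 truncover;
   file 'fixed.txt' lrecl=32767;
   length line buffer $32767;
   retain buffer '';
   input line $char32767.;
   buffer = catx(' ', buffer, strip(line));
   if prxmatch('/^([^|]*\|){7}[^|]*$/', strip(buffer)) then do;  /* complete record? */
      put buffer;
      buffer = '';
   end;
run;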
SAS® Visual Analytics provides self-service capabilities for users to analyze, explore, and report on their own data. As users explore their data, there is always a need to bring in more data sources, create new variables, combine data from multiple sources, and even update your data occasionally. SAS Visual Analytics provides targeted user capabilities to access, modify, and enhance data suitable for specific business needs. This paper provides a clear understanding of these capabilities and suggests best practices for self-service data management in SAS Visual Analytics.
Gregor Herrmann, SAS
Based on work by Thall et al. (2012), we implement a method for randomizing patients in a Phase II trial. We accumulate evidence that identifies which dose(s) of a cancer treatment provide the most desirable profile, per a matrix of efficacy and toxicity combinations rated by expert oncologists (0-100). Experts also define the region of Good utility scores and criteria of dose inclusion based on toxicity and efficacy performance. Each patient is rated for efficacy and toxicity at a specified time point. Simulation work is done mainly using PROC MCMC in which priors and likelihood function for joint outcomes of efficacy and toxicity are defined to generate posteriors. Resulting joint probabilities for doses that meet the inclusion criteria are used to calculate the mean utility and probability of having Good utility scores. Adaptive randomization probabilities are proportional to the probabilities of having Good utility scores. A final decision of the optimal dose will be made at the end of the Phase II trial.
Qianyi Huang, McDougall Scientific Ltd.
John Amrhein, McDougall Scientific Ltd.
Fitting mixed models to complicated data, such as data that include multiple sources of variation, can be a daunting task. SAS/STAT® software offers several procedures and approaches for fitting mixed models. This paper provides guidance on how to overcome obstacles that commonly occur when you fit mixed models using the MIXED and GLIMMIX procedures. Examples are used to showcase procedure options and programming techniques that can help you overcome difficult data and modeling situations.
Kathleen Kiernan, SAS
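A minimal sketch of a generalized linear mixed model of the kind the paper troubleshoots, assuming binary outcomes for subjects nested within clinics (data set and variable names are illustrative):

proc glimmix data=trial method=laplace;
   class clinic subject treatment;
   model response(event='1') = treatment visit / dist=binary link=logit solution;
   random intercept / subject=clinic;              /* clinic-level variation           */
   random intercept / subject=subject(clinic);     /* subject-within-clinic variation  */
run;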
Are we alone in this universe? This is a question that undoubtedly passes through every mind several times during a lifetime. We often hear stories about close encounters, Unidentified Flying Object (UFO) sightings, and other mysterious things, but we lack documented evidence for analysis on this topic. UFOs have been a matter of public interest for a long time. The objective of this paper is to analyze a database containing documented reports of UFO sightings to uncover any fascinating stories in the data. Using SAS® Enterprise Miner™ 13.1, the powerful capabilities of text analytics and topic mining are leveraged to summarize the associations between reported sightings. We used PROC GEOCODE to convert the addresses of sightings to locations on a map. The GEOCODE procedure converts address data to geographic coordinates (latitude and longitude values), which can then be used on a map to calculate distances or to perform spatial analysis. We then used the GMAP procedure to produce a heat map representing the frequency of sightings in various locations. On preliminary analysis of the data associated with sightings, we found that the most popular words associated with UFOs describe their shapes, formations, movements, and colors. The Text Profile node in SAS Enterprise Miner 13.1 was leveraged to build a model and cluster the data into different levels of a segment variable. We also explain how opinions about the UFO sightings change over time using text profiling. Further, this analysis uses the Text Profile node to find interesting terms or topics that were used to describe the UFO sightings. Based on the feedback received at the SAS® Analytics Conference, we plan to incorporate a technique to filter duplicate comments and to include the weather at each sighting location.
Pradeep Reddy Kalakota, Federal Home Loan Bank of Des Moines
Naresh Abburi, Oklahoma State University
Goutam Chakraborty, Oklahoma State University
Zabiulla Mohammed, Oklahoma State University
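A minimal sketch of the geocoding and mapping steps mentioned above, assuming the sightings data contain a ZIP variable and a two-letter state code; data set and variable names are illustrative:

proc geocode method=zip data=sightings out=geo_sightings;   /* ZIP lookup via SASHELP.ZIPCODE */
run;

proc sql;
   create table state_freq as
   select stfips(statecode) as state,      /* postal code to numeric FIPS for MAPS.US */
          count(*) as n_sightings
   from geo_sightings
   group by calculated state;
quit;

proc gmap data=state_freq map=maps.us;
   id state;
   choro n_sightings / levels=5;           /* frequency of sightings by state */
run;
quit;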
In 2013, the University of North Carolina (UNC) at Chapel Hill initiated enterprise-wide use of SAS® solutions for reporting and data transformations. Just over one year later, the initial rollout was scheduled to go live to an audience of 5,500 users as part of an adoption of PeopleSoft ERP for Finance, Human Resources, Payroll, and Student systems. SAS® Visual Analytics was used for primary report delivery as an embedded resource within the UNC Infoporte, an existing portal. UNC made the date. With the SAS solutions, UNC delivered the data warehouse and initial reports on the same day that the ERP systems went live. After the success of the initial launch, UNC continues to develop and evolve the solution with additional technologies, data, and reports. This presentation touches on a few of the elements required for a medium to large size organization to integrate SAS solutions such as SAS Visual Analytics and SAS® Enterprise Business Intelligence within their infrastructure.
Jonathan Pletzke, UNC Chapel Hill
Ordinary least squares regression is one of the most widely used statistical methods. However, it is a parametric model and relies on assumptions that are often not met. Alternative methods of regression for continuous dependent variables relax these assumptions in various ways. This paper explores procedures such as QUANTREG, ADAPTIVEREG, and TRANSREG for these kinds of data.
Peter Flom, Peter Flom Consulting
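A minimal sketch of one of the alternatives discussed, quantile regression with PROC QUANTREG, assuming a continuous outcome Y and predictors X1 and X2 (names are illustrative):

proc quantreg data=mydata;
   model y = x1 x2 / quantile=0.25 0.5 0.75;   /* model several conditional quantiles */
run;

Because conditional quantiles are modeled directly, no assumption of normally distributed, homoscedastic errors is required.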
Missing data are a common and significant problem that researchers and data analysts encounter in applied research. Because most statistical procedures require complete data, missing data can substantially affect the analysis and the interpretation of results if left untreated. Methods to treat missing data have been developed so that missing values are imputed and analyses can be conducted using standard statistical procedures. Among these missing data methods, multiple imputation has received considerable attention and its effectiveness has been explored (for example, in the context of survey and longitudinal research). This paper compares four multiple imputation approaches for treating missing continuous covariate data under MCAR, MAR, and NMAR assumptions, in the context of propensity score analysis and observational studies. The comparison of the four MI approaches in terms of bias in parameter estimates, Type I error rates, and statistical power is presented. In addition, complete case analysis (listwise deletion) is presented as the default analysis that would be conducted if missing data are not treated. Issues are discussed, and conclusions and recommendations are provided.
Patricia Rodriguez de Gil, University of South Florida
Shetay Ashford, University of South Florida
Chunhua Cao, University of South Florida
Eun-Sook Kim, University of South Florida
Rheta Lanehart, University of South Florida
Reginald Lee, University of South Florida
Jessica Montgomery, University of South Florida
Yan Wang, University of South Florida
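A minimal sketch of the generic multiple-imputation flow that the compared approaches share: impute, analyze each completed data set, and combine. It assumes continuous covariates X1-X3 and a binary treatment indicator TREAT in a data set STUDY (all names illustrative); the propensity score analysis in the paper builds on this skeleton.

proc mi data=study nimpute=20 out=imputed seed=54321;
   var x1 x2 x3;
run;

proc logistic data=imputed;
   by _imputation_;
   model treat(event='1') = x1 x2 x3;        /* propensity model fit to each imputation */
   ods output ParameterEstimates=lgsparms;
run;

proc mianalyze parms=lgsparms;
   modeleffects Intercept x1 x2 x3;          /* combine estimates across imputations */
run;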
Whenever you travel, whether it's to a new destination or to your favorite vacation spot, it's nice to have a guide to assist you with planning and setting expectations. The ODS LAYOUT statement became production in SAS® 9.4. For those intrepid programmers who used ODS LAYOUT in an earlier release of SAS®, this paper contains tons of information about changes you need to know about. Programmers new to SAS 9.4 (or new to ODS LAYOUT) need to understand the basics. This paper reviews some common issues customers have reported to SAS Technical Support when migrating to the LAYOUT destination in SAS 9.4 and explores the basics for those who are making their first foray into the adventure that is ODS LAYOUT. This paper discusses some tips and tricks to ensure that your trip through the ODS LAYOUT statement will be a fun and rewarding one.
Scott Huntley, SAS
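A minimal sketch of the SAS 9.4 production syntax, placing two pieces of output side by side in a PDF file (the file name is illustrative):

ods pdf file='layout_demo.pdf';
ods layout gridded columns=2;
   ods region;
   proc print data=sashelp.class(obs=5); run;
   ods region;
   proc sgplot data=sashelp.class;
      scatter x=height y=weight;
   run;
ods layout end;
ods pdf close;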
In the very near future you will likely encounter Hadoop. It is rapidly displacing database management systems in the corporate world and is rearing its head in the SAS® world. If you think now is the time to learn how to use SAS with Hadoop, you are in luck. This workshop is the jump start you need. It introduces Hadoop and shows you how to access it by using SAS/ACCESS® Interface to Hadoop. During the workshop, we show you how to configure your SAS environment so that you can access Hadoop data, how to use the Hadoop FILENAME statement, how to use the HADOOP procedure, and how to use SAS/ACCESS Interface to Hadoop (including performance tuning).
Diane Hatcher, SAS
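A minimal sketch of the three access methods the workshop covers; host names, paths, configuration files, and credentials are illustrative and site specific:

/* 1. FILENAME access to a file in HDFS */
filename hdp hadoop '/user/sasdemo/class.csv' cfg='/opt/sas/hadoop/config.xml';
data class;
   infile hdp dsd firstobs=2;
   input name :$12. sex :$1. age height weight;
run;

/* 2. PROC HADOOP to run an HDFS command */
proc hadoop username='sasdemo' password='XXXXXXXX' verbose;
   hdfs mkdir='/user/sasdemo/output';
run;

/* 3. SAS/ACCESS Interface to Hadoop via a LIBNAME to Hive */
libname hive hadoop server='hivenode.example.com' schema=default;
proc sql;
   select count(*) from hive.weblogs;
quit;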
The importance of econometrics in the analytics toolkit is increasing every day. Econometric modeling helps uncover structural relationships in observational data. This paper highlights the many recent changes to the SAS/ETS® portfolio that increase your power to explain the past and predict the future. Examples show how you can use Bayesian regression tools for price elasticity modeling, use state space models to gain insight from inconsistent time series, use panel data methods to help control for unobserved confounding effects, and much more.
Mark Little, SAS
Kenneth Sanford, SAS
This paper provides an overview of analysis of data derived from complex sample designs. General discussion of how and why analysis of complex sample data differs from standard analysis is included. In addition, a variety of applications are presented using PROC SURVEYMEANS, PROC SURVEYFREQ, PROC SURVEYREG, PROC SURVEYLOGISTIC, and PROC SURVEYPHREG, with an emphasis on correct usage and interpretation of results.
Patricia Berglund, University of Michigan
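A minimal sketch of design-based analysis with two of the procedures listed above, assuming stratum, cluster, and weight variables in a survey extract (names are illustrative):

proc surveymeans data=nhis_sub mean stderr;
   strata stratum;
   cluster psu;
   weight sampwt;
   var bmi;
run;

proc surveylogistic data=nhis_sub;
   strata stratum;
   cluster psu;
   weight sampwt;
   class sex;
   model obese(event='1') = age sex income;
run;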
This presentation provides an overview of the advancement of analytics in National Hockey League (NHL) hockey, including how it applies to areas such as player performance, coaching, and injuries. The speaker discusses his analysis on predicting concussions that was featured in the New York Times, as well as other examples of statistical analysis in hockey.
Peter Tanner, Capital One
Data mining and predictive models are extensively used to find the optimal customer targets in order to maximize the return on investment. Traditional direct marketing techniques target all the customers who are likely to buy, regardless of customer classification. In a real sense, this mechanism cannot distinguish the customers who would buy even without a marketing contact, thereby resulting in a loss on investment. This paper focuses on the incremental lift modeling approach using weight-of-evidence coding and information value, followed by incremental response and outcome model diagnostics. This model identifies the additional purchases that would not have taken place without a marketing campaign. Modeling work was conducted using a combined model. The research was carried out on Travel Center data. Compared with the results from the traditional campaign, which targeted everyone, this approach increased the average response rate by 2.8% and the number of fuel gallons by 244. This paper discusses in detail the implementation of the Incremental Response node to direct marketing campaigns and the associated incremental revenue and profit analysis.
Sravan Vadigepalli, Best Buy
Approximately 80% of world trade at present uses the seaways, with around 110,000 merchant vessels, 1.25 million seafarers, and almost 6 billion tons of goods transferred every year. Marine piracy stands as a serious challenge to sea trade. Understanding how pirate attacks occur is crucial to effectively countering marine piracy. Predictive modeling that combines textual data with numeric data provides an effective methodology to derive insights from both structured and unstructured data. 2,266 text descriptions about pirate incidents that occurred over the past seven years, from 2008 to the second quarter of 2014, were collected from the International Maritime Bureau (IMB) website. Analysis of the textual data using SAS® Enterprise Miner™ 12.3, with the help of concept links, answered questions on certain aspects of pirate activities, such as the following: 1. What arms do pirates use in attacks? 2. How do pirates steal ships? 3. How do pirates escape after the attacks? 4. What are the reasons for occasional unsuccessful attacks? Topics are extracted from the text descriptions using a Text Topic node, and the varying trends of these topics are analyzed with respect to time. Using the Cluster node, attack descriptions are classified into different categories based on attack style and pirate behavior described by a set of terms. A target variable called Attack Type is derived from the clusters and is combined with other structured input variables such as Ship Type, Status, Region, Part of Day, and Part of Year. A predictive model is built with Attack Type as the target variable and the other structured data variables as input predictors. The predictive model is used to predict the possible type of attack, given the details of the ship and its travel. Thus, the results of this paper could be very helpful for the shipping industry to become more aware of possible attack types for different vessel types when traversing different routes, and to devise counter-strategies for reducing the effects of piracy on crews, vessels, and cargo.
Raghavender Reddy Byreddy, Oklahoma State University
Nitish Byri, Oklahoma State University
Goutam Chakraborty, Oklahoma State University
Tejeshwar Gurram, Oklahoma State University
Anvesh Reddy Minukuri, Oklahoma State University
Data comes from a rich variety of sources in a rich variety of types, shapes, sizes, and properties. The analysis can be challenged by data that is too tall or too wide; too full of miscodings, outliers, or holes; or that contains funny data types. Wide data, in particular, has many challenges, requiring the analysis to adapt with different methods. Making covariance matrices with 2.5 billion elements is just not practical. JMP® 12 will address these challenges.
John Sall, SAS
This analysis is based on data for all transactions at four parking meters within a small area in central Copenhagen for a period of four years. The observations show the exact minute parking was bought and the amount of time for which parking was bought in each transaction. These series of at most 80,000 transactions are aggregated to the hour, day, week, and month using PROC TIMESERIES. The aggregated series of parking times and the number of transactions are analyzed for seasonality and interdependence by PROC X12, PROC UCM, and PROC VARMAX.
Anders Milhoj, Copenhagen University
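A minimal sketch of the aggregation step described above, assuming transaction-level data with a datetime variable TRANSTIME and the purchased parking minutes in MINUTES (names are illustrative):

proc timeseries data=parking out=hourly;
   id transtime interval=hour accumulate=total setmissing=0;
   var minutes;                 /* total minutes of parking bought per hour */
run;

The hourly series can then be passed to PROC X12, PROC UCM, or PROC VARMAX for the seasonality and interdependence analysis.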
In many spatial analysis applications (including crime analysis, epidemiology, ecology, and forestry), spatial point process modeling can help you study the interaction between different events and help you model the process intensity (the rate of event occurrence per unit area). For example, crime analysts might want to estimate where crimes are likely to occur in a city and whether they are associated with locations of public features such as bars and bus stops. Forestry researchers might want to estimate where trees grow best and test for association with covariates such as elevation and gradient. This paper describes the SPP procedure, new in SAS/STAT® 13.2, for exploring and modeling spatial point pattern data. It describes methods that PROC SPP implements for exploratory analysis of spatial point patterns and for log-linear intensity modeling that uses covariates. It also shows you how to use specialized functions for studying interactions between points and how to use specialized analytical graphics to diagnose log-linear models of spatial intensity. Crime analysis, forestry, and ecology examples demonstrate key features of PROC SPP.
Pradeep Mohan, SAS
Randy Tobias, SAS
The Ebola virus outbreak is producing some of the most significant and fastest trending news throughout the globe today. There is a lot of buzz surrounding the deadly disease and the drastic consequences that it potentially poses to mankind. Social media provides the basic platforms for millions of people to discuss the issue and allows them to openly voice their opinions. There has been a significant increase in the magnitude of responses all over the world since the death of an Ebola patient in a Dallas, Texas hospital. In this paper, we aim to analyze the overall sentiment that is prevailing in the world of social media. For this, we extracted the live streaming data from Twitter at two different times using the Python scripting language. One instance relates to the period before the death of the patient, and the other relates to the period after the death. We used SAS® Text Miner nodes to parse, filter, and analyze the data and to get a feel for the patterns that exist in the tweets. We then used SAS® Sentiment Analysis Studio to further analyze and predict the sentiment of the Ebola outbreak in the United States. In our results, we found that the issue was not taken very seriously until the death of the Ebola patient in Dallas. After the death, we found that prominent personalities across the globe were talking about the disease and then raised funds to fight it. We are continuing to collect tweets. We analyze the locations of the tweets to produce a heat map that corresponds to the intensity of the varying sentiment across locations.
Dheeraj Jami, Oklahoma State University
Goutam Chakraborty, Oklahoma State University
Shivkanth Lanka, Oklahoma State University
The merge is one of the SAS® programmer's most commonly used tools. However, it can be fraught with pitfalls to the unwary user. In this paper, we look under the hood of the DATA step and examine how the program data vector works. We see what's really happening when data sets are merged and how to avoid subtle problems.
Joshua Horstman, Nested Loop Consulting
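A minimal sketch of one of the subtle problems discussed: when a non-BY variable exists in both data sets, the value read last silently overwrites the one already in the program data vector (data here are illustrative):

data left;
   input id amount;
   datalines;
1 100
2 200
;
data right;
   input id amount;
   datalines;
1 999
;
data merged;
   merge left right;     /* for id=1, AMOUNT from RIGHT (999) overwrites 100 */
   by id;
run;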
You know that you want to control the process flow of your program. When your program is executed multiple times, with slight variations, you will need to control the changes from iteration to iteration, the timing of the execution, and the maintenance of output and logs. Unfortunately, in order to achieve the control that you know that you need to have, you will need to make frequent, possibly time-consuming and potentially error-prone, manual corrections and edits to your program. Fortunately, the control you seek is available and it does not require the use of time-intensive manual techniques. List processing techniques are available that give you control and peace of mind and enable you to be a successful control freak. These techniques are not new, but there is often hesitancy on the part of some programmers to take full advantage of them. This paper reviews these techniques and demonstrates them through a series of examples.
Mary Rosenbloom, Edwards Lifesciences, LLC
Art Carpenter, California Occidental Consultants
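A minimal sketch of one data-driven list processing pattern of the kind the paper reviews: build a delimited list of values from the data, then iterate over it in a macro (SASHELP.CARS keeps the sketch self-contained):

proc sql noprint;
   select distinct origin into :origin_list separated by '|'
   from sashelp.cars;
quit;

%macro run_by_origin;
   %local i origin;
   %do i = 1 %to %sysfunc(countw(&origin_list, |));
      %let origin = %scan(&origin_list, &i, |);
      title "Mean MSRP for origin: &origin";
      proc means data=sashelp.cars mean;
         where origin = "&origin";
         var msrp;
      run;
   %end;
%mend run_by_origin;
%run_by_origin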
This presentation is an open-ended discussion about techniques for transferring data and analytical results from SAS® to Microsoft Excel. There are some introductory comments, but this presentation does not have any set content. Instead, the topics discussed are dictated by attendee questions. Come prepared to ask and get answers to your questions. To submit your questions or suggestions for discussion in advance, go to http://support.sas.com/surveys/askvince.html.
Vince DelGobbo, SAS
There is a goldmine of information that is available to you in SAS® metadata. The challenge, however, is being able to retrieve and leverage that information. While there is useful functionality available in SAS® Management Console as well as a collection of functional macros provided by SAS to help accomplish this, getting a complete metadata picture in an automated way has proven difficult. This paper discusses the methods we have used to find core information within SAS® 9.2 metadata and how we have been able to pull this information in a programmatic way. We used Base SAS®, SAS® Data Integration Studio, PC SAS®, and SAS® XML Mapper to build a solution that now provides daily metadata reporting about our SAS Data Integration Studio jobs, job flows, tables, and so on. This information can now be used for auditing purposes as well as for helping us build our full metadata inventory as we prepare to migrate to SAS® 9.4.
Rupinder Dhillon, Bell Canada
Darryl Prebble, Prebble Consulting Inc.
Whether you manage computer systems in a small-to-medium environment (for example, in labs, workshops, or corporate training groups) or in a large-scale deployment, the ability to automate SAS® 9.4 installations is important to the efficiency and success of your software deployments. For large-scale deployments, you can automate the installation process by using third-party provisioning software such as Microsoft System Center Configuration Manager (SCCM) or Symantec Altiris. But what if you have a small-to-medium environment and you do not have provisioning software to package deployment jobs? No worries! There is a solution. This paper presents a case study of just such a situation where a process was developed for SAS regional users groups (RUGs). Along with the case study, the paper offers a process for automating SAS 9.4 installations in workshop, lab, and corporate training (small-to-medium sized) environments. This process incorporates the new -srwonly option with the SAS® Deployment Wizard, deployment-wizard commands that use response files, and batch-file implementation. This combination results in easy automation of an installation, even without provisioning software.
Max Blake, SAS
Census data, such as education and income, have been used extensively for various purposes. The data are usually collected as percentages at the census-unit level, based on the population sample. Such a presentation of the data makes it hard to interpret and compare. A more convenient way of presenting the data is to use the geocoded percentages to produce counts for a pseudo-population. We developed a very flexible SAS® macro to automatically generate descriptive summary tables for the census data as well as to conduct statistical tests to compare the different levels of a variable by group. The SAS macro is not only useful for census data but can be used to generate summary tables for any data with percentages in multiple categories.
Janet Lee, Kaiser Permanente Southern California
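A minimal sketch of the underlying idea: convert unit-level percentages into pseudo-population counts so that standard categorical summaries and tests can be applied. The macro described above is far more general; all names here are illustrative.

%macro pct_to_counts(in=, out=, popvar=, pctvars=);
   data &out;
      set &in;
      array p{*} &pctvars;
      array c{*} cnt_1 - cnt_%sysfunc(countw(&pctvars));
      do _i = 1 to dim(p);
         c{_i} = round(&popvar * p{_i} / 100);   /* percentage to pseudo count */
      end;
      drop _i;
   run;
%mend pct_to_counts;

%pct_to_counts(in=census_tracts, out=census_counts,
               popvar=total_pop, pctvars=pct_hs pct_college pct_graduate)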
SAS® Visual Analytics is deployed by many customers. IT departments are tasked with efficiently managing server resources, achieving maximum usage of resources, optimizing availability, and managing costs. Business users expect the system to be available when needed and to perform to their expectations. Business executives who sponsor business intelligence (BI) and analytical projects like to see that their decision to support and finance the project meets business requirements. Business executives also like to know how different people in the organization are using SAS Visual Analytics. With the release of SAS Visual Analytics 7.1, new functionality is added to support the memory management of the SAS® LASR™ Analytic Server, and new out-of-the-box usage and audit reporting is introduced. This paper covers BI-on-BI for SAS Visual Analytics. It also discusses the new functionality introduced for SAS Visual Analytics administration and addresses questions about resource management, data compression, and the out-of-the-box usage reporting of SAS Visual Analytics. Key product capabilities are demonstrated.
Murali Nori, SAS
We have to pull data from several data files when creating our working databases. The simplest use of SAS® hash objects greatly reduces the time required to draw data from many sources, compared to using multiple PROC SORT steps and merges.
Andrew Dagis, City of Hope
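A minimal sketch of the technique: load a small lookup file into a hash object and enrich a large file in a single pass, with no sorting (data set and variable names are illustrative):

data claims_enriched;
   if _n_ = 1 then do;
      if 0 then set providers;                     /* define host variables       */
      declare hash h(dataset: 'providers');
      h.defineKey('provider_id');
      h.defineData('provider_name', 'specialty');
      h.defineDone();
   end;
   set claims;
   if h.find() ne 0 then call missing(provider_name, specialty);   /* no match */
run;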
Over the years, the SAS® Business Intelligence platform has proved its importance in this big data world with its suite of applications that enable us to efficiently process, analyze, and transform huge amounts of business data. Within the data warehouse universe, 'batch execution' sits at the heart of SAS Data Integration technologies. On a day-to-day basis, batches run, and the current status of the batch is generally sent out to the team or to the client as a 'static' e-mail or as a report. From experience, we know that these don't provide much insight into the real 'bits and bytes' of a batch run. Imagine if the status of the running batch were automatically captured in one central repository and presented in a beautiful web browser on your computer or on your iPad. All this can be achieved without asking anybody to send reports and with all 'post-batch' queries being answered automatically with a click. This paper addresses exactly that, with a framework designed specifically to automate the reporting aspects of SAS batches. And, yes, it is all about collecting statistics of the batch, so we call it 'BatchStats.'
Prajwal Shetty, Tesco HSC
As the SAS® platform becomes increasingly metadata-driven, it becomes increasingly important to get the structures and controls surrounding the metadata repository correct. This presentation aims to point out some of the considerations and potential pitfalls of working with the metadata infrastructure. It also suggests some solutions that have been used with the aim of making this process as simple as possible.
Paul Thomas, ASUP Ltd
The power of SAS®9 applications allows information and knowledge creation from very large amounts of data. Analysis that used to consist of 10s-100s of gigabytes (GBs) of supporting data has rapidly grown into the 10s to 100s of terabytes (TBs). This data expansion has resulted in more and larger SAS data stores. Setting up file systems to support these large volumes of data with adequate performance, as well as ensuring adequate storage space for the SAS® temporary files, can be very challenging. Technology advancements in storage and system virtualization, flash storage, and hybrid storage management require continual updating of best practices to configure I/O subsystems. This paper presents updated best practices for configuring the I/O subsystem for your SAS®9 applications, ensuring adequate capacity, bandwidth, and performance for your SAS®9 workloads. We have found that very few storage systems work ideally with SAS with their out-of-the-box settings, so it is important to convey these general guidelines.
Tony Brown, SAS
Margaret Crevar, SAS
We regularly speak with organizations running established SAS® 9.1.3 systems that have not yet upgraded to a later version of SAS®. Often this is because their current SAS 9.1.3 environment is working fine, and no compelling event to upgrade has materialized. Now that SAS 9.1.3 has moved to a lower level of support and some very exciting technologies (Hadoop, cloud, ever-better scalability) are more accessible than ever using SAS® 9.4, the case for migrating from SAS 9.1.3 is strong. Upgrading a large SAS ecosystem with multiple environments, an active development stream, and a busy production environment can seem daunting. This paper aims to demystify the process, suggesting outline migration approaches for a variety of the most common scenarios in SAS 9.1.3 to SAS 9.4 upgrades, and a scalable template project plan that has been proven at a range of organizations.
David Stern, SAS
You've worked for weeks or even months to produce an analysis suite for a project. Then, at the last moment, someone wants a subgroup analysis, and they inform you that they need it yesterday. This should be easy to do, right? So often, the programs that we write fall apart when we use them on subsets of the original data. This paper takes a look at some of the best practice techniques that can be built into a program at the beginning, so that users can subset on the fly without losing categories or creating errors in statistical tests. We review techniques for creating tables and corresponding titles with BY-group processing so that minimal code needs to be modified when more groups are created. And we provide a link to sample code and sample data that can be used to get started with this process.
Mary Rosenbloom, Edwards Lifesciences, LLC
Kirk Paul Lafler, Software Intelligence Corporation
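A minimal sketch of two of the defensive techniques alluded to above: PRELOADFMT with COMPLETETYPES keeps zero-count categories in the table after subsetting, and #BYVAL keeps titles in sync with BY groups (data set and variable names are illustrative):

proc format;
   value $sexf 'F' = 'Female'
               'M' = 'Male';
run;

proc sort data=adsl; by trt01p; run;

options nobyline;
title "Demographics for treatment: #byval(trt01p)";

proc means data=adsl n mean completetypes;
   by trt01p;
   class sex / preloadfmt;      /* all format levels appear, even if a subset drops one */
   format sex $sexf.;
   var age;
run;

options byline;
title;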
SAS® provides a wealth of resources for creating useful, attractive metadata tables, including PROC CONTENTS listing output (to ODS destinations), the PROC CONTENTS OUT= SAS data set, and PROC CONTENTS ODS output objects. This paper and presentation explore some less well-known resources for creating metadata, such as %SYSFUNC, PROC DATASETS, and DICTIONARY tables. All these options, in conjunction with the ExcelXP tagset (and, new in the second maintenance release for SAS® 9.4, the Excel tagset), enable the creation of multi-tab metadata workbooks at the click of a mouse.
Louise Hadden, Abt Associates Inc.
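A minimal sketch of a multi-tab metadata workbook driven by DICTIONARY tables and the ExcelXP tagset (the file path is illustrative; SASHELP is used so the sketch runs anywhere):

ods tagsets.excelxp file='metadata.xml' style=normal
    options(sheet_interval='proc' embedded_titles='yes');

ods tagsets.excelxp options(sheet_name='Tables');
proc sql;
   select memname, nobs, nvar, crdate
   from dictionary.tables
   where libname = 'SASHELP' and memtype = 'DATA';
quit;

ods tagsets.excelxp options(sheet_name='Columns');
proc sql;
   select memname, name, type, length, format, label
   from dictionary.columns
   where libname = 'SASHELP' and memname = 'CLASS';
quit;

ods tagsets.excelxp close;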
SAS® has been an early leader in big data technology architecture that more easily integrates unstructured files across multi-tier data system platforms. By using SAS® Data Integration Studio and SAS® Enterprise Business Intelligence software, you can easily automate big data using SAS® system accommodations for Hadoop open-source standards. At the same time, another seminal technology has emerged, which involves real-time multi-sensor data integration using Arduino microprocessors. This break-out session demonstrates the use of SAS® 9.4 coding to define Hadoop clusters and to automate Arduino data acquisition to convert custom unstructured log files into structured tables, which can be analyzed by SAS in near real time. Examples include the use of SAS Data Integration Studio to create and automate stored processes, as well as tips for C language object coding to integrate to SAS data management, with a simple temperature monitoring application for Hadoop to Arduino using SAS.
Keith Allan Jones PHD, QUALIMATIX.com
Your data analysis projects can leverage the new HYPERGROUP action to mine relationships using graph theory. Discover which data entities are related and, conversely, which sets of values are disjoint. In cases when the sets of values are not completely disjoint, HYPERGROUP can identify data that is strongly connected, neighboring data that is weakly connected, and data that is a greater distance away. Each record is assigned a hypergroup number, and within hypergroups a color, a community, or both. The GROUPBY facility, WHERE clauses, or both can act on the hypergroup number, color, or community to conduct analytics using data that is close, related, or more relevant. The algorithms used to determine hypergroups are based on graph theory. We show how the results of HYPERGROUP allow equivalent graphs to be displayed, useful information to be seen, and aid you in controlling what data is required to perform your analytics. Crucial data structure can be unearthed and seen.
Yue Qi, SAS
Trevor Kearney, SAS
To bring order to the wild world of big data, EMC and its partners have joined forces to meet customer challenges and deliver a modern analytic architecture. This unified approach encompasses big data management, analytics discovery, and deployment via end-to-end solutions that solve your big data problems. They are also designed to free up more time for innovation, deliver faster deployments, and help you find new insights from secure and properly managed data. The EMC Business Data Lake is a fully engineered, enterprise-grade data lake built on a foundation of core data technologies. It provides pre-configured building blocks that enable self-service, end-to-end integration, management, and provisioning of the entire big data environment. Major benefits include the ability to make more timely and informed business decisions and realize the vision of analytics in weeks instead of months. SAS enhances the Federation Business Data Lake by providing superior breadth and depth of analytics to tackle any big data analytics problem an organization might have, whether it's fraud detection, risk management, customer intelligence, predictive asset maintenance, or something else. SAS and EMC work together to deliver a robust and comprehensive big data solution that reduces risk, automates provisioning and configuration, and is purpose-built for big data analytics workloads.
Casey James, EMC
Learn how a new product from SAS enables you to easily build and compare multiple candidate models for all your business segments.
Steve Sparano, SAS
Creating DataFlux® jobs that can be executed from job scheduling software can be challenging. This presentation guides participants through the creation of a template job that accepts an email distribution list, subject, email verbiage, and attachment file name as macro variables. It also demonstrates how to call this job from a command line.
Jeanne Estridge, Sinclair Community College
This presentation focuses on building a graph template in an easy-to-follow, step-by-step manner. The presentation begins with using Graph Template Language to re-create a simple series plot, and then moves on to include a secondary y-axis as well as multiple overlaid block plots to tell a more complex and complete story than would be possible using only the SGPLOT procedure.
Jed Teres, Verizon Wireless
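A minimal sketch of the pattern the presentation builds up to: a series plot, a second y-axis, and an overlaid block plot in one Graph Template Language template (data set and variable names are illustrative):

proc template;
   define statgraph series_block;
      begingraph;
         entrytitle 'Sales and Temperature with Promotion Periods';
         layout overlay / yaxisopts=(label='Sales')
                          y2axisopts=(label='Temperature');
            blockplot x=week block=promo_flag / display=(fill) filltype=alternate;
            seriesplot x=week y=sales;
            seriesplot x=week y=temperature / yaxis=y2;
         endlayout;
      endgraph;
   end;
run;

proc sgrender data=weekly template=series_block;
run;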
What do going to the South Pole at the beginning of the 20th century, winning the 1980 gold medal in Olympic hockey, and delivering a successful project have in common? The answer: Good teams succeed when groups of highly talented individuals often do not. So, what is your Everest and how do you gather the right group to successfully scale it? Success often hinges on not just building a team, but on assembling the right team. Join Scott Sanders, a business and IT veteran who has effectively built, managed, and been part of successful teams throughout his 27-year career. Hear some of his best practices for how to put together a good team and keep them focused, engaged, and motivated to deliver a project.
Scott Sanders, Sears Holdings
So you are still writing SAS® DATA steps and SAS macros and running them through a command-line scheduler. When work comes in, there is only one person who knows that code, and they are out--what to do? This paper shows how SAS applies extract, transform, load (ETL) modernization techniques with SAS® Data Integration Studio to gain resource efficiencies and to break down the ETL black box. We are going to share the fundamentals (metadata foldering and naming standards) that ensure success, along with steps to ease into the pool while iteratively gaining benefits. Benefits include self-documenting code visualization, impact analysis on jobs and tables impacted by change, and being supportable by interchangeable bench resources. We conclude with demonstrating how SAS® Visual Analytics is being used to monitor service-level agreements and provide actionable insights into job-flow performance and scheduling.
Brandon Kirk, SAS
The DATA step enables you to read, write, and manipulate many types of data. As data evolves to a more free-form state, the ability of SAS® to handle character data becomes increasingly important. This presentation, expanded and enhanced from an earlier version, addresses character data from multiple vantage points. For example, what is the default length of a character string, and why does it appear to change under different circumstances? Special emphasis is given to the myriad functions that can facilitate the processing and manipulation of character data. This paper is targeted at a beginning to intermediate audience.
Andrew Kuligowski, HSN
Swati Agarwal, OptumInsight
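A minimal sketch of a few of the character-handling functions covered, along with one of the default-length surprises the paper explains (values are illustrative):

data char_demo;
   raw   = '  Smith, John Q.  ';
   clean = strip(raw);                        /* remove leading/trailing blanks */
   last  = scan(clean, 1, ',');               /* token before the comma         */
   first = propcase(scan(clean, 2, ','));     /* token after the comma          */
   full  = catx(' ', first, last);            /* rejoin with single blanks      */
   pos   = findc(clean, ',.');                /* position of first ',' or '.'   */
   /* LAST, FIRST, and FULL default to length 200 here unless a LENGTH
      statement declares them first; a common "default length" surprise. */
run;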
With all the talk of 'big data' and 'visual analytics' we sometimes forget how important it is, and often how hard it is, to get external data into SAS®. In this paper, we review some common data sources such as delimited sources (for example, CSV), as well as structured flat files, and the programming steps needed to successfully load these files into SAS. In addition to examining the INFILE and INPUT statements, we look at some methods for dealing with bad data. This paper assumes only basic SAS skills, although the topic can be of interest to anyone who needs to read external files.
Peter Eberhardt, Fernwood Consulting Group Inc.
Audrey Yeo, Athlene
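A minimal sketch of reading a CSV with the DSD and TRUNCOVER options plus a simple bad-record trap (file name and layout are illustrative):

data orders;
   infile 'orders.csv' dsd dlm=',' firstobs=2 truncover;
   input order_id :8. customer :$30. order_date :mmddyy10. amount :comma12.;
   format order_date date9. amount dollar12.2;
   if missing(order_id) then do;
      put 'WARNING: bad record at line ' _n_= / _infile_;
      delete;
   end;
run;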
This session is an introduction to predictive analytics and causal analytics in the context of improving outcomes. The session covers the following topics: 1) Basic predictive analytics vs. causal analytics; 2) The causal analytics framework; 3) Testing whether the outcomes improve because of an intervention; 4) Targeting the cases that have the best improvement in outcomes because of an intervention; and 5) Tweaking an intervention in a way that improves outcomes further.
Jason Pieratt, Humana
Data analysis begins with cleaning up data, calculating descriptive statistics, and examining variable distributions. Before more rigorous statistical analysis begins, many statisticians perform basic inferential statistical tests such as chi-square and t tests to assess unadjusted associations. These tests help guide the direction of the more rigorous statistical analysis. We show how to perform chi-square and t tests, explain how to interpret the output, and describe where to look for the association or difference based on the hypothesis being tested. We propose next steps for further analysis using example data.
Maribeth Johnson, Georgia Regents University
Jennifer Waller, Georgia Regents University
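A minimal sketch of the two tests using SASHELP.HEART so that it runs anywhere (STATUS by SEX for the chi-square test, CHOLESTEROL by SEX for the t test):

proc freq data=sashelp.heart;
   tables sex*status / chisq expected;   /* chi-square test of association */
run;

proc ttest data=sashelp.heart;
   class sex;                            /* two-group comparison of means  */
   var cholesterol;
run;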
In Bayesian statistics, Markov chain Monte Carlo (MCMC) algorithms are an essential tool for sampling from probability distributions. PROC MCMC implements these algorithms for you. However, it is often desirable to code an algorithm from scratch; this is especially true in academia, where students are expected to be able to understand and code an MCMC algorithm. The ability of SAS® to accomplish this is relatively unknown, yet the task is quite straightforward. We use SAS/IML® to demonstrate methods for coding an MCMC algorithm, with examples of a Gibbs sampler and a Metropolis-Hastings random walk.
Chelsea Lofland, University of California Santa Cruz
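A minimal sketch of a Metropolis-Hastings random walk coded from scratch in SAS/IML, targeting a standard normal density purely for illustration:

proc iml;
   call randseed(2015);
   n = 10000;                               /* number of draws                        */
   x = j(n, 1, .);
   x[1] = 0;                                /* starting value                         */
   logcur = -0.5 # x[1]##2;                 /* log target density (up to a constant)  */
   do i = 2 to n;
      prop    = x[i-1] + randfun(1, 'Normal', 0, 0.5);   /* random walk proposal      */
      logprop = -0.5 # prop##2;
      if log(randfun(1, 'Uniform')) < logprop - logcur then do;   /* accept           */
         x[i] = prop;   logcur = logprop;
      end;
      else x[i] = x[i-1];                                         /* reject           */
   end;
   acceptRate = mean( x[2:n] ^= x[1:n-1] ); /* proportion of accepted moves           */
   print acceptRate;
quit;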
There is a plethora of uses of the colon (:) in SAS® programming. The colon is used as a data or variable name wild-card, a macro variable creator, an operator modifier, and so forth. The colon helps you write clear, concise, and compact code. The main objective of this paper is to encourage the effective use of the colon in writing crisp code. This paper presents real-time applications of the colon in day-to-day programming. In addition, this paper presents cases where the colon limits programmers' wishes.
Jinson Erinjeri, EMMES Corporation
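A minimal sketch of several of the colon idioms the paper catalogs (SASHELP.CLASS keeps it self-contained; other names are illustrative):

data keep_demo;
   set sashelp.class;
   keep name h: w:;            /* name-prefix wildcard keeps HEIGHT and WEIGHT */
run;

data starts_with_j;
   set sashelp.class;
   if name =: 'J';             /* comparison modifier: NAME begins with 'J'    */
run;

proc sql noprint;
   select count(*) into :nobs  /* the colon creates macro variable NOBS        */
   from sashelp.class;
quit;
%put NOTE: sashelp.class has &nobs observations.;

data read_list;
   input id name :$20. score;  /* colon informat modifier in list input        */
   datalines;
1 Alexandria 88
2 Bo 93
;
run;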
Survey research can provide a straightforward and effective means of collecting input on a range of topics. Survey researchers often like to group similar survey items into construct domains in order to make generalizations about a particular area of interest. Confirmatory Factor Analysis is used to test whether this pre-existing theoretical model underlies a particular set of responses to survey questions. Based on Structural Equation Modeling (SEM), Confirmatory Factor Analysis provides the survey researcher with a means to evaluate how well the actual survey response data fits within the a priori model specified by subject matter experts. PROC CALIS now provides survey researchers the ability to perform Confirmatory Factor Analysis using SAS®. This paper provides a survey researcher with the steps needed to complete Confirmatory Factor Analysis using SAS. We discuss and demonstrate the options available to survey researchers in the handling of missing and not applicable survey responses using an ARRAY statement within a DATA step and imputation of item non-response. A simple demonstration of PROC CALIS is then provided with interpretation of key portions of the SAS output. Using recommendations provided by SAS from the PROC CALIS output, the analysis is then modified to provide a better fit of survey items into survey domains.
Lindsey Brown Philpot, Baylor Scott & White Health
Sunni Barnes, Baylor Scott & White Health
Crystal Carel, Baylor Scott & White Health Care System
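A minimal sketch of a two-factor confirmatory model in PROC CALIS, assuming six survey items Q1-Q6 in a data set SURVEY; the factor names, item assignments, and requested fit indices are illustrative:

proc calis data=survey method=ml modification;
   factor
      Satisfaction ===> q1 q2 q3,
      Engagement   ===> q4 q5 q6;
   pvar Satisfaction = 1.0, Engagement = 1.0;   /* fix factor variances for identification */
   fitindex on(only)=[chisq df probchi cfi rmsea srmsr];
run;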
In previous papers I have described how many standard SAS/GRAPH® plots can be converted easily to ODS Graphics by using simple PROC SGPLOT or SGPANEL code. SAS/GRAPH Annotate code would appear, at first sight, to be much more difficult to convert to ODS Graphics, but by using its layering features, many Annotate plots can be replicated in a more flexible and repeatable way. This paper explains how to convert many of your Annotate plots, so they can be reproduced using Base SAS®.
Philip Holland, Holland Numerics Limited
This presentation explains how to use Base SAS®9 software to create multi-sheet Excel workbooks. You learn step-by-step techniques for quickly and easily creating attractive multi-sheet Excel workbooks that contain your SAS® output using the ExcelXP Output Delivery System (ODS) tagset. The techniques can be used regardless of the platform on which SAS software is installed. You can even use them on a mainframe! Creating and delivering your workbooks on-demand and in real time using SAS server technology is discussed. Although the title is similar to previous presentations by this author, this presentation contains new and revised material not previously presented.
Vince DelGobbo, SAS
With the expansive new features in SAS® Visual Analytics 7.1, you can now take control of the graph data while viewing a report. Using parameterized expressions, calculated items, custom categories, and prompt controls, you can now change the measures or categories on a graph from a mobile device or web viewer. View your data from different perspectives while using the same graph. This paper demonstrates how you can use these features in SAS® Visual Analytics Designer to create reports in which graph roles can be dynamically changed with the click of a button.
Kenny Lui, SAS
The Centers for Disease Control and Prevention (CDC) went through a large migration from a mainframe to a Windows platform. This e-poster will highlight the Data Automated Transfer Utility (DATU) that was developed to migrate historic files between the two file systems using SAS® macros and SAS/CONNECT®. We will demonstrate how this program identifies the type of file, transfers the file appropriately, verifies the successful transfer, and provides the details in a Microsoft Excel report. SAS/CONNECT code, special system options, and mainframe code will be shown. In 2009, the CDC made the decision to retire a mainframe that was used for years of primarily SAS work. The replacement platform is a SAS grid system, based on Windows, which is referred to as the Consolidated Statistical Platform (CSP). The change from mainframe to Windows required the migration of over a hundred thousand files totaling approximately 20 terabytes. To minimize countless man hours and human error, an automated solution was developed. DATU was developed for users to migrate their files from the mainframe to the new Windows CSP or other Windows destinations. Approximately 95% of the files on the CDC mainframe were one of three file types: SAS data sets, sequential text files, and partitioned data sets (PDS) libraries. DATU dynamically determines the file type and uses the appropriate method to transfer the file to the assigned Windows destination. Variations of files are detected and handled appropriately. File variations include multiple SAS versions of SAS data sets and sequential files that contain binary values such as packed decimal fields. To mitigate the loss of numeric precision during the migration, SAS numeric variables are identified and promoted to account for architectural differences between mainframe and Windows platforms. To aid users in verifying the accuracy of the file transfer, the program compares file information of the source and destination files. When a SAS file is downloaded, PROC CONTENTS is run on both files, and the PROC CONTENTS output is compared. For sequential text files, a checksum is generated for both files and the checksum files are compared. A PDS file transfer creates a list of the members in the PDS and the destination Windows folder, and the file lists are compared. The development of this program and the file migration was a daunting task. This paper will share some of our lessons learned along the way and the method of our implementation.
Jim Brittain, National Center for Health Statistics (CDC)
Robert Schwartz, National Centers for Disease Control and Prevention
The DATA Step has served SAS® programmers well over the years, and although it is handy, the new, exciting, and powerful DS2 is a significant alternative to the DATA Step by introducing an object-oriented programming environment. It enables users to effectively manipulate complex data and efficiently manage the programming through additional data types, programming structure elements, user-defined methods, and shareable packages, as well as threaded execution. This tutorial is developed based on our experiences with getting started with DS2 and learning to use it to access, manage, and share data in a scalable and standards-based way. It helps SAS users of all levels get started with DS2 easily and understand its basic functionality by practicing its features.
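As a flavor of the syntax (table and variable names are hypothetical), a minimal DS2 data program might look like this:
   proc ds2;
      data work.order_totals (overwrite=yes);
         dcl double line_total;        /* declared with an explicit DS2 data type */
         method run();
            set work.orders;
            line_total = quantity * unit_price;
         end;
      enddata;
   run;
   quit;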
Peter Eberhardt, Fernwood Consulting Group Inc.
Xue Yao, Winnipeg Regional Health Authority
Texas is one of about 30 states that have recently passed laws requiring voters to produce valid IDs in an effort to prevent illegal voting. This new regulation, however, might negatively affect voting opportunities for students, low-income people, and minorities. To determine the actual effects of the regulation in Dallas County, voters were surveyed when exiting the polling offices during the November midterm election about difficulties that they might have encountered in the voting process. The database of the voting history of each registered voter in the county was examined, and the data set was converted into an analyzable structure prior to stratification. All of the polling offices were stratified by the residents' degrees of involvement in the past three general elections, namely, the proportion of people who have used early voting and who have voted at least once. A two-phase sampling design was adopted for stratification. On election day, pollsters were sent to selected polling offices and interviewed 20 voters during a selected time period. The number of people having difficulties was estimated once the data were collected.
Yusun Xia, Southern Methodist University
Soon after the advent of the SAS® hash object in SAS® 9.0, its early adopters realized that the potential functionality of the new structure is much broader than basic O(1)-time lookup and file matching. Specifically, they went on to invent methods of data aggregation based on the ability of the hash object to quickly store and update key summary information. They also demonstrated that the DATA step aggregation using the hash object offered significantly lower run time and memory utilization compared to the SUMMARY/MEANS or SQL procedures, coupled with the possibility of eliminating the need to write the aggregation results to interim data files and the programming flexibility that allowed them to combine sophisticated data manipulation and adjustments of the aggregates within a single step. Such developments within the SAS user community did not go unnoticed by SAS R&D, and for SAS® 9.2 the hash object had been enriched with tag parameters and methods specifically designed to handle aggregation without the need to write the summarized data to the PDV host variable and update the hash table with new key summaries, thus further improving run-time performance. As more SAS programmers applied these methods in their real-world practice, they developed aggregation techniques fit to various programmatic scenarios and ideas for handling the hash object memory limitations in situations calling for truly enormous hash tables. This paper presents a review of the DATA step aggregation methods and techniques using the hash object. The presentation is intended for all situations in which the final SAS code is either a straight Base SAS DATA step or a DATA step generated by any other SAS product.
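A bare-bones sketch of the basic technique (data set and variable names are hypothetical): a hash table keyed on the class variable accumulates totals in a single pass.
   data _null_;
      if _n_ = 1 then do;
         dcl hash h(ordered:'a');
         h.definekey('product');
         h.definedata('product','total');
         h.definedone();
      end;
      set work.sales end=last;
      if h.find() ne 0 then total = 0;   /* new key: start the running total at zero */
      total = sum(total, amount);
      h.replace();                       /* store the updated summary back in the table */
      if last then h.output(dataset:'work.sales_summary');
   run;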
Paul Dorfman, Dorfman Consulting
Don Henderson, Henderson Consulting Services
Many users would like to check the quality of data after the data integration process has loaded the data into a data set or table. The approach in this paper shows users how to develop a process that scores columns based on rules judged against a set of standards set by the user. Each rule has a standard that determines whether it passes, fails, or needs review (a green, red, or yellow score). A rule can be as simple as: Is the value for this column missing, or is this column within a valid range? Further, it includes comparing a column to one or more other columns, or checking for specific invalid entries. It also includes rules that compare a column value to a lookup table to determine whether the value is in the lookup table. Users can create their own rules and each column can have any number of rules. For example, a rule can be created to measure a dollar column to a range of acceptable values. The user can determine that it is expected that up to two percent of the values are allowed to be out of range. If two to five percent of the values are out of range, then data should be reviewed. And, if over five percent of the values are out of range, the data is not acceptable. The entire table has a color-coded scorecard showing each rule and its score. Summary reports show columns by score and distributions of key columns. The scorecard enables the user to quickly assess whether the SAS data set is acceptable, or whether specific columns need to be reviewed. Drill-down reports enable the user to drill into the data to examine why the column scored as it did. Based on the scores, the data set can be accepted or rejected, and the user will know where and why the data set failed. The process can store each scorecard data in a data mart. This data mart enables the user to review the quality of their data over time. It can answer questions such as: is the quality of the data improving overall? Are there specific columns that are improving or declining over time? What can we do to improve the quality of our data? This scorecard is not intended to replace the quality control of the data integration or ETL process. It is a supplement to the ETL process. The programs are written using only Base SAS® and Output Delivery System (ODS), macro variables, and formats. This presentation shows how to: (1) use ODS HTML; (2) color code cells with the use of formats; (3) use formats as lookup tables; (4) use INCLUDE statements to make use of template code snippets to simplify programming; and (5) use hyperlinks to launch stored processes from the scorecard.
Tom Purvis, Qualex Consulting Services, Inc.
A common problem when developing classification models is the imbalance of classes in the classification variable. This imbalance means that one class is represented by a large number of cases while the other class is represented by very few. When this happens, the predictive power of the developed model could be biased, because classification methods tend to favor the majority class and are designed to minimize the error on the total data set regardless of the proportions or balance of the classes. Due to this problem, there are several techniques used to balance the distribution of the classification variable. One method is to reduce the size of the majority class (under-sampling); another is to increase the number of cases in the minority class (over-sampling); a third is to combine these two methods. There is also a more complex technique called SMOTE (Synthetic Minority Over-sampling Technique) that consists of intelligently generating new synthetic records of the minority class using a nearest-neighbors approach. In this paper, we present the development in SAS® of a combination of SMOTE and under-sampling techniques as applied to a churn model. Then, we compare the predictive power of the model using this proposed balancing technique against other models developed with different data sampling techniques.
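As one illustrative piece of such a workflow (not the authors' SMOTE implementation; data set, variable names, and sample size are hypothetical), the under-sampling step alone can be done with PROC SURVEYSELECT:
   proc surveyselect data=churn(where=(churn_flag=0)) out=majority_sample
        method=srs sampsize=5000 seed=12345;     /* simple random sample of the majority class */
   run;

   data balanced;                                /* recombine with the full minority class */
      set majority_sample
          churn(where=(churn_flag=1));
   run;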
Lina Maria Guzman Cartagena, DIRECTV
Graduate students often need to explore data and summarize multiple statistical models into tables for a dissertation. The challenges of data summarization include coding multiple, similar statistical models, and summarizing these models into meaningful tables for review. The default method is to type (or copy and paste) results into tables. This often takes longer than creating and running the analyses. Students might spend hours creating tables, only to have to start over when a change or correction in the underlying data requires the analyses to be updated. This paper gives graduate students the tools to efficiently summarize the results of statistical models in tables. These tools include a macro-based SAS/STAT® analysis and ODS OUTPUT statement to summarize statistics into meaningful tables. Specifically, we summarize PROC GLM and PROC LOGISTIC output. We convert an analysis of hospital-acquired delirium from hundreds of pages of output into three formatted Microsoft Excel files. This paper is appropriate for users familiar with basic macro language.
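A hedged sketch of the core idea (data set and variable names are hypothetical; exporting to .xlsx assumes SAS/ACCESS® Interface to PC Files): ODS OUTPUT captures the statistics as data sets that a macro can then loop over, reshape, and export.
   ods output ParameterEstimates=pe;
   proc glm data=analysis;
      class group;
      model los = group age / solution;
   run; quit;

   ods output OddsRatios=or;
   proc logistic data=analysis;
      class group / param=ref;
      model delirium(event='1') = group age;
   run;

   proc export data=or outfile='delirium_odds_ratios.xlsx' dbms=xlsx replace;
   run;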
Elisa Priest, Texas A&M University Health Science Center
Ashley Collinsworth, Baylor Scott & White Health/Tulane University
Deep Learning is one of the most exciting research areas in machine learning today. While Deep Learning algorithms are typically very sophisticated, you may be surprised how much you can understand about the field with just a basic knowledge of neural networks. Come learn the fundamentals of this exciting new area and see some of SAS' newest technologies for neural networks.
Patrick Hall, SAS
As SAS® programmers and statisticians, we rarely write programs that are run only once and then set aside. Instead, we are often asked to develop programs very early in a project, on immature data, following specifications that may be little more than a guess as to what the data is supposed to look like. These programs will then be run repeatedly on periodically updated data through the duration of the project. This paper offers strategies for not only making those programs more flexible, so they can handle some of the more commonly encountered variations in that data, but also for setting traps to identify unexpected data points that require further investigation. We will also touch upon some good programming practices that can benefit both the original programmer and others who might have to touch the code. In this paper, we will provide explicit examples of defensive coding that will aid in kicking the tires, pumping the brakes, checking your blind spots, and merging ahead for quality programming from the beginning.
Donna Levy, inVentiv Health Clinical
Nancy Brucken, inVentiv Health Clinical
Using geocoded addresses from FDIC Summary of Deposits data with Census geospatial data including TIGER boundary files and population-weighted centroid shapefiles, we were able to calculate a reasonable distance threshold by metropolitan statistical area (MSA) (or metropolitan division (MD), where applicable) through a series of SAS® DATA steps and SQL joins. We first used the Cartesian join with PROC SQL on the data set containing population-weighted centroid coordinates. (The data set contained geocoded coordinates of approximately 91,000 full-service bank branches.) Using the GEODIST function in SAS, we were able to calculate the distance to the nearest bank branch from the population-weighted centroid of each Census tract. The tract data set was then grouped by MSA/MD and sorted in ascending order within each grouping (using the RETAIN statement) by distance to the nearest bank branch. We calculated the cumulative population and cumulative population percent for each MSA/MD. The reasonable threshold distance is established where cumulative population percent is closest (in either direction +/-) to 90%.
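A condensed sketch of the distance step (data set and variable names are hypothetical): a Cartesian join in PROC SQL paired with GEODIST returns the nearest-branch distance for each tract centroid.
   proc sql;
      create table nearest_branch as
      select t.tract_id, t.msa_md,
             min(geodist(t.cent_lat, t.cent_lon, b.lat, b.lon, 'M')) as miles_to_branch
      from tract_centroids as t, bank_branches as b     /* Cartesian join of tracts and branches */
      group by t.tract_id, t.msa_md;
   quit;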
Sarah Campbell, Federal Deposit Insurance Corporation
Intervals have been a feature of Base SAS® for a long time, enabling SAS users to work with commonly (and not-so-commonly) defined periods of time such as years, months, and quarters. With the release of SAS®9, there are more options and capabilities for intervals and their functions. This paper first discusses the basics of intervals in detail, and then discusses several of the enhancements to the interval feature, such as the ability to select how the INTCK function defines interval boundaries and the ability to create your own custom intervals beyond multipliers and shift operators.
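Two of the newer capabilities, sketched with hypothetical dates and a hypothetical custom-interval data set: the INTCK method argument and a custom interval defined via the INTERVALDS= system option.
   data _null_;
      d1 = '15JAN2015'd;  d2 = '14FEB2015'd;
      discrete   = intck('month', d1, d2);                /* 1: a month boundary was crossed */
      continuous = intck('month', d1, d2, 'continuous');  /* 0: not yet a full calendar month */
      put discrete= continuous=;
   run;

   options intervalds=(tradingday=mylib.tradingdays);     /* data set with a BEGIN date variable */
   data _null_;
      n = intck('tradingday', '02JAN2015'd, '30JAN2015'd);
      put n=;
   run;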
Derek Morgan
Analyzing the key success factors for hit songs in the Billboard music charts is an ongoing area of interest to the music industry. Although there have been many studies over the past decades on predicting whether a song has the potential to become a hit song, the following research question remains: Can hit songs be predicted? And, if the answer is yes, what are the characteristics of those hit songs? This study applies data mining techniques using SAS® Enterprise Miner™ to understand why some music is more popular than other music. In particular, certain songs are considered one-hit wonders, which are in the Billboard music charts only once. Meanwhile, other songs are acknowledged as masterpieces. With 2,139 data records, the results demonstrate the practical validity of our approach.
Piboon Banpotsakun, National Institute of Development Administration
Jongsawas Chongwatpol, NIDA Business School, National Institute of Development Administration
We have many options for performing merges or joins these days, each with various advantages and disadvantages. Depending on how you perform your joins, different checks can help you verify whether the join was successful. In this presentation, we look at some sample data, use different methods, and see what kinds of tests can be done to ensure that the results are correct. If the join is performed with PROC SQL and two criteria are fulfilled (the number of observations in the primary data set has not changed [presuming a one-to-one or many-to-one situation], and a variable that should be populated is not missing), then the merge was successful.
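A small sketch of those two checks (data set and variable names are hypothetical):
   proc sql noprint;
      select count(*) into :n_before from primary;
      create table joined as
      select a.*, b.region
      from primary as a
           left join lookup as b
           on a.id = b.id;
      select count(*), nmiss(region) into :n_after, :n_miss from joined;
   quit;
   %put NOTE: rows before=&n_before after=&n_after missing REGION=&n_miss;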
Emmy Pahmer, inVentiv Health
Data scientists and analytic practitioners have become obsessed with quantifying the unknown. Through text mining third-person posthumous narratives in SAS® Enterprise Miner™ 12.1, we measured tangible aspects of personalities based on the broadly accepted big-five characteristics: extraversion, agreeableness, conscientiousness, neuroticism, and openness. These measurable attributes are linked to common descriptive terms used throughout our data to establish statistical relationships. The data set contains over 1,000 obituaries from newspapers throughout the United States, with individuals who vary in age, gender, demographics, and socio-economic circumstances. In our study, we leveraged existing literature to build the ontology used in the analysis. This literature suggests that a third person's perspective gives insight into one's personality, solidifying the use of obituaries as a source for analysis. We statistically linked target topics such as career, education, religion, art, and family to the five characteristics. With these taxonomies, we developed multivariate models in order to assign scores to predict an individual's personality type. With a trained model, this study has implications for predicting an individual's personality, allowing for better decisions on human capital deployment. Even outside the traditional application of personality assessment for organizational behavior, the methods used to extract intangible characteristics from text enable us to identify valuable information across multiple industries and disciplines.
Mark Schneider, Deloitte & Touche
Andrew Van Der Werff, Deloitte & Touche, LLP
Being able to split SAS® processing over multiple SAS processers on a single machine or over multiple machines running SAS, as in the case of SAS® Grid Manager, enables you to get more done in less time. This paper looks at the methods of using SAS/CONNECT® to process SAS code in parallel, including the SAS statements, macros, and PROCs available to make this processing easier for the SAS programmer. SAS products that automatically generate parallel code are also highlighted.
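A skeletal example of explicit parallelism with SAS/CONNECT (the SASCMD value, PROJ library, and data set names are assumptions for illustration; PROJ is assumed to be defined in each remote session):
   options autosignon sascmd='!sascmd';

   rsubmit task1 wait=no;
      proc summary data=proj.sales2014 nway;
         class region; var amount; output out=proj.sum2014 sum=;
      run;
   endrsubmit;

   rsubmit task2 wait=no;
      proc summary data=proj.sales2015 nway;
         class region; var amount; output out=proj.sum2015 sum=;
      run;
   endrsubmit;

   waitfor _all_ task1 task2;   /* block until both remote sessions finish */
   signoff _all_;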
Doug Haigh, SAS
Your project ended a year ago, and now you need to explain what you did, or rerun some of your code. Can you remember the process? Can you describe what you did? Can you even find those programs? Visually presented here are examples of tools and techniques that follow best practices to help us, as programmers, manage the flow of information from source data to a final product.
Elizabeth Axelrod, Abt Associates Inc.
This paper explains best practices for using temporary files in SAS® programs. These practices include using the TEMP access method, writing to the WORK directory, and ensuring that you leave no litter files behind. An additional special temporary file technique is described for mainframe users.
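For example, both of these approaches leave nothing behind when the SAS session ends:
   filename scratch temp;          /* TEMP access method: the file vanishes with the session */
   data _null_;
      file scratch;
      put 'intermediate results';
   run;

   data work.interim;              /* the WORK library is deleted automatically as well */
      set sashelp.class;
   run;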
Rick Langston, SAS
Most manuscripts in medical journals contain summary tables that combine simple summaries and between-group comparisons. These tables typically combine estimates for categorical and continuous variables. The statistician generally summarizes the data using the FREQ procedure for categorical variables and compares percentages between groups using a chi-square or a Fisher's exact test. For continuous variables, the MEANS procedure is used to summarize data as either means and standard deviation or medians and quartiles. Then these statistics are generally compared between groups by using the GLM procedure or NPAR1WAY procedure, depending on whether one is interested in a parametric test or a non-parametric test. The outputs from these different procedures are then combined and presented in a concise format ready for publications. Currently there is no straightforward way in SAS® to build these tables in a presentable format that can then be customized to individual tastes. In this paper, we focus on presenting summary statistics and results from comparing categorical variables between two or more independent groups. The macro takes the dataset, the number of treatment groups, and the type of test (either chi-square or Fisher's exact) as input and presents the results in a publication-ready table. This macro automates summarizing data to a certain extent and minimizes risky typographical errors when copying results or typing them into a table.
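The building blocks inside such a macro are ordinary ODS OUTPUT data sets; a hedged sketch with hypothetical data set and variable names:
   ods output CrossTabFreqs=freqs ChiSq=chisq FishersExact=fisher;
   proc freq data=trial;
      tables treatment*response / chisq fisher;
   run;
   /* FREQS, CHISQ, and FISHER can now be reshaped and stacked into the report table */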
Jeff Gossett, University of Arkansas for Medical Sciences
Mallikarjuna Rettiganti, UAMS
It has always been a million-dollar question: What inhibits a donor from donating? Many successful universities have deep roots in annual giving. We know donor sentiment is a key factor in drawing attention to engage donors. This paper is a summary of findings about donor behaviors using textual analysis combined with the power of predictive modeling. In addition to identifying the characteristics of donors, the paper focuses on identifying the characteristics of a first-time donor. It distinguishes the features of the first-time donor from the general donor pattern. It leverages the variations in data to provide deeper insights into behavioral patterns. A data set containing 247,000 records was obtained from the XYZ University Foundation alumni database, Facebook, and Twitter. Solicitation content such as email subject lines sent to the prospect base was considered. Time-dependent data and time-independent data were categorized to make unbiased predictions about the first-time donor. The predictive models use input such as age, educational records, scholarships, events, student memberships, and solicitation methods. Models such as decision trees, Dmine regression, and neural networks were built to predict the prospects. SAS® Sentiment Analysis Studio and SAS® Enterprise Miner™ were used to analyze the sentiment.
Ramcharan Kakarla, Comcast
Goutam Chakraborty, Oklahoma State University
The purpose of this paper is to introduce a SAS® macro named %DOUBLEGLM that enables users to model the mean and dispersion jointly using double generalized linear models described in Nelder (1991) and Lee (1998). The R functions FITJOINT and DGLM (R Development Core Team, 2011) were used to verify the suitability of the %DOUBLEGLM macro estimates. The results showed that the macro estimates closely matched those obtained from the R functions.
Paulo Silva, Universidade de Brasilia
Alan Silva, Universidade de Brasilia
Programming SAS® has just been made easier, now that SAS 9.4 has incorporated the Lua programming language into the heart of the SAS System. With its elegant syntax, modern design, and support for data structures, Lua offers you a fresh way to write SAS programs, getting you past many of the limitations of the SAS macro language. This paper shows how you can get started using Lua to drive SAS, via a quick introduction to Lua and a tour through some of the features of the Lua and SAS combination that make SAS programming easier. SAS macro programming is also compared with Lua, so that you can decide where you might benefit most by using either language.
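A small taste of the combination: Lua values are substituted into generated SAS code through sas.submit (the substitution table shown here is a minimal sketch).
   proc lua;
   submit;
     local ds = "sashelp.class"
     -- generate and run a SAS step from Lua
     sas.submit([[
        proc means data=@ds@ mean max;
           var height weight;
        run;
     ]], {ds = ds})
   endsubmit;
   run;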
Paul Tomas, SAS
Dynamic, interactive visual displays known as dashboards are most effective when they show essential graphs, tables, statistics, and other information where data is the star. The first rule for creating an effective dashboard is to keep it simple. Striking a balance between content and style, a dashboard should be devoid of excessive clutter so as not to distract from and obscure the information displayed. The second rule of effective dashboard design involves displaying data that meets one or more business or organizational objectives. To accomplish this, the elements in a dashboard should be presented in a format easily understood by its intended audience. Attendees learn how to create dynamic, interactive user- and data-driven dashboards, graphical and table-driven dashboards, statistical dashboards, and drill-down dashboards with a purpose.
Kirk Paul Lafler, Software Intelligence Corporation
With the latest release of SAS® Business Rules Manager, decision-making using SAS® Stored Processes is now easier with simplified deployment via a web service for integration with your applications and business processes. This paper shows you how a user can publish analytics and rules as SOAP-based web services, track their usage, and dynamically update these decisions using SAS Business Rules Manager. In addition, we demonstrate how to integrate with SAS® Model Manager using SAS® Workflow to show how your other SAS® applications and solutions can also simplify real-time decision-making through business rules.
Lori Small, SAS
Chris Upton, SAS
Do you need to deliver business insight and analytics to support decision-making? Using SAS® Enterprise Guide®, you can access the full power of SAS® for analytics, without needing to learn the details of SAS programming. This presentation focuses on the following uses of SAS Enterprise Guide: exploring and understanding--getting a feel for your data and for its issues and anomalies; visualizing--looking at the relationships, trends, and surprises; consolidating--starting to piece together the story; and presenting--building the insight and analytics into a presentation using SAS Enterprise Guide.
Marje Fecht, Prowerk Consulting
Programmers can create keyboard macros to perform common editing tasks in SAS® Enterprise Guide®. This paper introduces how to record keystrokes, save a keyboard macro, edit the commands, and assign a shortcut key. Sample keyboard macros are included. Techniques to share keyboard macros are also covered.
Christopher Bost, MDRC
For the Research Data Centers (RDCs) of the United States Census Bureau, the demand for disk space substantially increases with each passing year. Efficiently using the SAS® data view might successfully address the concern about disk space challenges within the RDCs. This paper discusses the usage and benefits of the SAS data view to save disk space and reduce the time and effort required to manage large data sets. The ability and efficiency of the SAS data view to process regular ASCII, compressed ASCII, and other commonly used file formats are analyzed and evaluated in detail. The authors discuss ways in which using SAS data views is more efficient than the traditional methods in processing and deploying the large census and survey data in the RDCs.
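A compact illustration (the RDC library, file path, and variables are hypothetical): the view stores only the program, so the raw ASCII file is never duplicated on disk.
   data rdc.survey_v / view=rdc.survey_v;
      infile '/rdc/extracts/survey2014.dat' dlm='|' dsd firstobs=2;
      input hhid :$12. state :$2. income hhweight;
   run;

   proc means data=rdc.survey_v mean sum;   /* rows are materialized only as they are read */
      var income;
      weight hhweight;
   run;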
Shigui Weng, US Bureau of the Census
Shy Degrace, US Bureau of the Census
Ya Jiun Tsai, US Bureau of the Census
The use of Bayesian methods has become increasingly popular in modern statistical analysis, with applications in numerous scientific fields. In recent releases, SAS® software has provided a wealth of tools for Bayesian analysis, with convenient access through several popular procedures in addition to the MCMC procedure, which is designed for general Bayesian modeling. This paper introduces the principles of Bayesian inference and reviews the steps in a Bayesian analysis. It then uses examples from the GENMOD and PHREG procedures to describe the built-in Bayesian capabilities, which became available for all platforms in SAS/STAT® 9.3. Discussion includes how to specify prior distributions, evaluate convergence diagnostics, and interpret the posterior summary statistics.
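For instance, adding a single BAYES statement to PROC GENMOD requests a full MCMC analysis (the data set and variables here are hypothetical):
   proc genmod data=claims;
      class region;
      model n_claims = region age / dist=poisson link=log;
      bayes seed=20150412 nmc=20000 outpost=post
            coeffprior=normal(var=100)        /* informative-ish normal prior on coefficients */
            diagnostics=all plots=all;        /* convergence diagnostics and trace plots */
   run;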
Maura Stokes, SAS
As many leadership experts suggest, growth happens only when it is intentional. Growth is vital to the immediate and long-term success of our employees as well as our employers. As SAS® programming leaders, we have a responsibility to encourage individual growth and to provide the opportunity for it. With an increased workload yet fewer resources, initial and ongoing training seem to be deemphasized as we are pressured to meet project timelines. The current workforce continues to evolve with time and technology. More important than simply providing the opportunity for training, individual trainees need the motivation for any training program to be successful. Although many existing principles for growth remain true, how such principles are applied needs to evolve with the current generation of SAS programmers. The primary goal of this poster is to identify the critical components that we feel are necessary for the development of an effective training program, one that meets the individual needs of the current workforce. Rather than proposing a single 12-step program that works for everyone, we think that identifying key components for enhancing the existing training infrastructure is a step in the right direction.
Amber Randall, Axio Research
Proper management of master data is a critical component of any enterprise information system. However, effective master data management (MDM) requires that both IT and Business understand the life cycle of master data and the fundamental principles of entity resolution (ER). This presentation provides a high-level overview of current practices in data matching, record linking, and entity information life cycle management that are foundational to building an effective strategy to improve data integration and MDM. Particular areas of focus are: 1) The need for ongoing ER analytics--the systematic and quantitative measurement of ER performance; 2) Investing in clerical review and asserted resolution for continuous improvement; and 3) Addressing the large-scale ER challenge through distributed processing.
John Talburt, Black Oak Analytics, Inc
My SAS® Global Forum 2013 paper 'Variable Reduction in SAS® by Using Weight of Evidence (WOE) and Information Value (IV)' has become the most sought-after online article on variable reduction in SAS since its publication. But the methodology provided by the paper is limited to reduction of numeric variables for logistic regression only. Built on a similar process, the current paper adds several major enhancements: 1) The use of WOE and IV has been expanded to the analytics and modeling for continuous dependent variables. After the standardization of a continuous outcome, all records can be divided into two groups: positive performance (outcome y above sample average) and negative performance (outcome y below sample average). This treatment is rigorously consistent with the concept of entropy in Information Theory: the juxtaposition of two opposite forces in one equation, and a stronger contrast between the two suggests a higher intensity, that is, more information delivered by the variable in question. As the standardization keeps the outcome variable continuous and quantified, the revised formulas for WOE and IV can be used in the analytics and modeling for continuous outcomes such as sales volume, claim amount, and so on. 2) Categorical and ordinal variables can be assessed together with numeric ones. 3) Users of big data usually need to evaluate hundreds or thousands of variables, but it is not uncommon that over 90% of variables contain little useful information. We have added a SAS macro that trims these variables efficiently in a broad-brushed manner without a thorough examination. Afterward, we examine the retained variables more carefully on their behaviors to the target outcome. 4) We add Chi-Square analysis for categorical/ordinal variables and Gini coefficients for numeric variables in order to provide additional suggestions for segmentation and regression. With the above enhancements added, a SAS macro program is provided at the end of the paper as a complete suite for variable reduction/selection that efficiently evaluates all variables together. The paper provides a detailed explanation for how to use the SAS macro and how to read the SAS outputs that provide useful insights for subsequent linear regression, logistic regression, or scorecard development.
Alec Zhixiao Lin, PayPal Credit
Proving difference is the point of most statistical testing. In contrast, the point of equivalence and noninferiority tests is to prove that results are substantially the same, or at least not appreciably worse. An equivalence test can show that a new treatment, one that is less expensive or causes fewer side effects, can replace a standard treatment. A noninferiority test can show that a faster manufacturing process creates no more product defects or industrial waste than the standard process. This paper reviews familiar and new methods for planning and analyzing equivalence and noninferiority studies in the POWER, TTEST, and FREQ procedures in SAS/STAT® software. Techniques that are discussed range from Schuirmann's classic method of two one-sided tests (TOST) for demonstrating similar normal or lognormal means in bioequivalence studies, to Farrington and Manning's noninferiority score test for showing that an incidence rate (such as a rate of mortality, side effects, or product defects) is no worse. Real-world examples from clinical trials, drug development, and industrial process design are included.
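Two representative calls, sketched with hypothetical data sets and margins: Schuirmann's TOST via PROC TTEST, and a Farrington-Manning noninferiority test via PROC FREQ.
   proc ttest data=pk dist=lognormal tost(0.8, 1.25);   /* bioequivalence bounds on the ratio scale */
      class formulation;
      var auc;
   run;

   proc freq data=production;
      tables process*defect / riskdiff(noninf margin=0.05 method=fm);
   run;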
John Castelloe, SAS
Donna Watts, SAS
When faced with a difficult data reduction problem, a SAS® programmer has many options for how to solve the problem. In this presentation, three different methods are reviewed and compared in terms of processing time, debugging, and ease of understanding. The three methods include linearizing the data, using SQL Cartesian joins, and using sequential data processing. Inconsistencies in the raw data caused the data linearization to be problematic. The large number of records and the need for many-to-many merges resulted in a long run time for the SQL code. The sequential data processing, although older technology, provided the most time efficient and error-free results.
Carry Croghan, US-EPA
Design of experiments (DOE) is an essential component of laboratory, greenhouse, and field research in the natural sciences. It has also been an integral part of scientific inquiry in diverse social science fields such as education, psychology, marketing, pricing, and social work. The principles and practices of DOE are among the oldest and the most advanced tools within the realm of statistics. DOE classification schemes, however, are diverse and, at times, confusing. In this presentation, we provide a simple conceptual classification framework in which experimental methods are grouped into classical and statistical approaches. The classical approach is further divided into pre-, quasi-, and true-experiments. The statistical approach is divided into one-, two-, and more-than-two-factor experiments. Within these broad categories, we review several contemporary and widely used designs and their applications. The optimal use of Base SAS® and SAS/STAT® to analyze, summarize, and report these diverse designs is demonstrated. The prospects and challenges of such diverse and critically important analytics tools on business insight extraction in marketing and pricing research are discussed.
Max Friedauer
Jason Greenfield, Cardinal Health
Yuhan Jia, Cardinal Health
Joseph Thurman, Cardinal Health
Geospatial analysis plays an important role in data visualization for business intelligence (BI). Using geospatial data with business data on maps provides a visual context for understanding business patterns that are influenced by location-sensitive information. When analytics such as correlation, forecasting, and decision trees are integrated with location-based data, they create new business insights that are not common in traditional BI applications. The advanced analytics offered in SAS Visual Analytics and SAS Visual Statistics push the limits of the common mapping usage seen in BI applications. SAS is working on a new level of integration with ESRI, a leader in geospatial analytics. The new features from this integration will bring together the best of both technologies and provide new insights to business analysts and BI customers. This session hosts a demo of the new features planned for a future release of SAS Visual Analytics.
Murali Nori, SAS
Building and maintaining a data warehouse can require a complex series of jobs. Having an ETL flow that is reliable and well integrated is one big challenge. An ETL process might need some pre- and post-processing operations on the database to be well integrated and reliable. Some might handle this via maintenance windows. Others like us might generate custom transformations to be included in SAS® Data Integration Studio jobs. Custom transformations in SAS Data Integration Studio can be used to speed ETL process flows and reduce the database administrator's intervention after ETL flows are complete. In this paper, we demonstrate the use of custom transformations in SAS Data Integration Studio jobs to handle database-specific tasks for improving process efficiency and reliability in ETL flows.
Emre Saricicek, University of North Carolina at Chapel Hill
Dean Huff, UNC
From large holding companies with multiple subsidiaries to loosely affiliated state educational institutions, security domains are being federated to enable users from one domain to access applications in other domains and ultimately save money on software costs through sharing. Rather than rely on centralized security, applications must accept claims-based authentication from trusted authorities and support open standards such as Security Assertion Markup Language (SAML) instead of proprietary security protocols. This paper introduces SAML 2.0 and explains how the open source SAML implementation known as Shibboleth can be integrated with the SAS® 9.4 security architecture to support SAML. It then describes in detail how to set up Microsoft Active Directory Federation Services (AD FS) as the SAML Identity Provider, how to set up the SAS middle tier as the relying party, and how to troubleshoot problems.
Mike Roda, SAS
As organizations strive to do more with fewer resources, many modernize their disparate PC operations to centralized server deployments. Administrators and users share many concerns about using SAS® on a Microsoft Windows server. This paper outlines key guidelines, plus architecture and performance considerations, that are essential to making a successful transition from PC to server. This paper outlines the five key considerations for SAS customers who will change their configuration from PC-based SAS to using SAS on a Windows server: 1) Data and directory references; 2) Interactive and surrounding applications; 3) Usability; 4) Performance; 5) SAS Metadata Server.
Kate Schwarz, SAS
Donna Bennett, SAS
Margaret Crevar, SAS
SAS® Enterprise Guide® is a great interface for businesses running SAS® in a shared server environment. However, interacting with the shared server outside of SAS can require costly third-party software and knowledge of specific server programming languages. This can create a barrier between the SAS program and the server, which can be frustrating for even the best SAS programmers. This paper reviews the X and SYSTASK commands and creates a template of SAS code to pass commands from SAS to the server. By writing the server log to a text file, we demonstrate how to display critical server information in the code results. Using macros and the prompt functionality of SAS Enterprise Guide, we form stored procedures, allowing SAS users of all skill levels to interact with the server environment. These stored procedures can improve programming efficiency by providing a quick in-program solution to complete common server tasks such as copying folders or changing file permissions. They might also reduce the need for third-party programs to communicate with the server, which could potentially reduce software costs.
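A trimmed-down version of the pattern (paths and commands are hypothetical, for a UNIX server with XCMD allowed):
   x 'mkdir -p /projects/shared/archive';                        /* fire-and-forget shell command */

   systask command "cp -p /projects/shared/report.sas7bdat /projects/shared/archive"
           wait status=copy_rc;                                  /* return code lands in a macro variable */
   %put NOTE: server copy finished with status &copy_rc;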
Cody Murray, Medica Health Plans
Chad Stegeman, Medica
Are you looking to track changes to your SAS® programs? Do you wish you could easily find errors, warnings, and notes in your SAS logs? Looking for a convenient way to find point-and-click tasks? Want to search your SAS® Enterprise Guide® project? How about a point-and-click way to view SAS system options and SAS macro variables? Or perhaps you want to upload data to the SAS® LASR™ Analytics Server, view SAS® Visual Analytics reports, or run SAS® Studio tasks, all from within SAS Enterprise Guide? You can find these capabilities and more in SAS Enterprise Guide. Knowing what tools are at your disposal and how to use them will put you a step ahead of the rest. Come learn about some of the newer features in SAS Enterprise Guide 7.1 and how you can leverage them in your work.
Casey Smith, SAS
The PROPCASE function is useful when you are cleansing a database of names and addresses in preparation for mailing. But it does not know the difference between a proper name (in which initial capitalization should be used) and an acronym (which should be all uppercase). This paper explains an algorithm that determines with reasonable accuracy whether a word is an acronym and, if it is, converts it to uppercase.
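One possible form of such a heuristic (a rough sketch, not the author's exact algorithm; data set and variable names are hypothetical): short, vowel-free words are treated as acronyms.
   data cleaned;
      set rawnames;
      length word result $200;
      result = '';
      do i = 1 to countw(name, ' ');
         word = scan(name, i, ' ');
         if length(word) <= 5 and countc(upcase(word), 'AEIOUY') = 0 then
            word = upcase(word);        /* likely an acronym: keep it all uppercase */
         else
            word = propcase(word);      /* ordinary word: initial capital only */
         result = catx(' ', result, word);
      end;
      drop i word;
   run;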
Joe DeShon, Boehringer Ingelheim Vetmedica
A powerful tool for visually analyzing regression analysis is the forest plot. Model estimates, ratios, and rates with confidence limits are graphically stacked vertically in order to show how they overlap with each other and to show values of significance. The ability to see whether two values are significantly different from each other or whether a covariate has a significant meaning on its own is made much simpler in a forest plot rather than sifting through numbers in a report table. The amount of data preparation needed in order to build a high-quality forest plot in SAS® can be tremendous because the programmer needs to run analyses, extract the estimates to be plotted, structure the estimates in a format conducive to generating a forest plot, and then run the correct plotting procedure or create a graph template using the Graph Template Language (GTL). While some SAS procedures can produce forest plots using Output Delivery System (ODS) Graphics automatically, the plots are not generally publication-ready and are difficult to customize even if the programmer is familiar with GTL. The macro %FORESTPLOT is designed to perform all of the steps of building a high-quality forest plot in order to save time for both experienced and inexperienced programmers, and is currently set up to perform regression analyses common to the clinical oncology research areas, Cox proportional hazards and logistic, as well as calculate Kaplan-Meier event-free rates. To improve flexibility, the user can specify a pre-built data set to transform into a forest plot if the automated analysis options of the macro do not fit the user's needs.
Jeffrey Meyers, Mayo Clinic
Qian Shi, Mayo Clinic
The SAS® Global Forum paper 'Best Practices for Configuring Your I/O Subsystem for SAS®9 Applications' provides general guidelines for configuring I/O subsystems for your SAS® applications. The paper reflects updated storage and virtualization technology. This companion paper ('Frequently Asked Questions Regarding Storage Configurations') is commensurately updated, including new storage technologies such as storage virtualization, storage tiers (including automated tier management), and flash storage. The subject matter is voluminous, so a frequently asked questions (FAQ) format is used. Our goal is to continually update this paper as additional field needs arise and technology dictates.
Tony Brown, SAS
Margaret Crevar, SAS
During grad school, students learn SAS® in class or on their own for a research project. Time is limited, so faculty have to focus on what they know are the fundamental skills that students need to successfully complete their coursework. However, real-world research projects are often multifaceted and require a variety of SAS skills. When students transition from grad school to a paying job, they might find that in order to be successful, they need more than the basic SAS skills that they learned in class. This paper highlights 10 insights that I've had over the past year during my transition from grad school to a paying SAS research job. I hope this paper will help other students make a successful transition. Top 10 insights: 1. You still get graded, but there is no syllabus. 2. There isn't time for perfection. 3. Learn to use your resources. 4. There is more than one solution to every problem. 5. Asking for help is not a weakness. 6. Working with a team is required. 7. There is more than one type of SAS®. 8. The skills you learned in school are just the basics. 9. Data is complicated and often frustrating. 10. You will continue to learn both personally and professionally.
Lauren Hall, Baylor Scott & White Health
Elisa Priest, Texas A&M University Health Science Center
Data quality is now more important than ever before. According to Gartner (2011), poor data quality is the primary reason why 40% of all business initiatives fail to achieve their targeted benefits. As a response, Deloitte Belgium has created an agile data quality framework using SAS® Data Management to rapidly identify and resolve root causes of data quality issues to jump-start business initiatives, especially a data migration one. Moreover, the approach uses both standard SAS Data Management functionalities (such as standardization, parsing, etc.) and advanced features (such as using macros, dynamic profiling in deployment mode, extracting the profiling results, etc.), allowing the framework to be agile and flexible and to maximize the reusability of specific components built in SAS Data Management.
Yves Wouters, Deloitte
Valérie Witters, Deloitte
Data access collisions occur when two or more processes attempt to gain concurrent access to a single data set. Collisions are a common obstacle to SAS® practitioners in multi-user environments. As SAS instances expand to infrastructures and ultimately empires, the inherent increased complexities must be matched with commensurately higher code quality standards. Moreover, permanent data sets will attract increasingly more devoted users and automated processes clamoring for attention. As these dependencies increase, so too does the likelihood of access collisions that, if unchecked or unmitigated, lead to certain process failure. The SAS/SHARE® module offers concurrent file access capabilities, but causes a (sometimes dramatic) reduction in processing speed, must be licensed and purchased separately from Base SAS®, and is not a viable solution for many organizations. Previously proposed solutions in Base SAS use a busy-wait spinlock cycle to repeatedly attempt file access until process success or timeout. While effective, these solutions are inefficient because they generate only read-write locked data sets that unnecessarily prohibit access by subsequent read-only requests. This presentation introduces the %LOCKITDOWN macro that advances previous solutions by affording both read-write and read-only lock testing and deployment. Moreover, recognizing the responsibility for automated data processes to be reliable, robust, and fault tolerant, %LOCKITDOWN is demonstrated in the context of a macro-based exception handling paradigm.
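For context, a bare-bones version of the busy-wait pattern that %LOCKITDOWN improves upon (a generic sketch, not the %LOCKITDOWN macro itself; SLEEP availability varies by host):
   %macro waitlock(ds=, timeout=60, pause=2);
      %local start;
      %let start = %sysfunc(datetime());
      %do %until (&syslckrc = 0 or
                  %sysevalf(%sysfunc(datetime()) - &start > &timeout));
         lock &ds;                                     /* sets &SYSLCKRC */
         %if &syslckrc ne 0 %then %do;
            data _null_; rc = sleep(&pause, 1); run;   /* wait, then try again */
         %end;
      %end;
      /* on success the lock is held; release later with: lock &ds clear; */
   %mend waitlock;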
Troy Hughes, Datmesis Analytics
Quality measurement is increasingly important in the health-care sphere for both performance optimization and reimbursement. Treatment of chronic conditions is a key area of quality measurement. However, medication compendiums change frequently, and health-care providers often enter medications as free text into a patient's record. Manually reviewing a complete medications database is time consuming. In order to build a robust medications list, we matched a pharmacist-generated list of categorized medications to a raw medications database that contained names, name-dose combinations, and misspellings. The matching tool we used is the COMPGED function. We were able to combine a truncation function and an upcase function to optimize the output of COMPGED. Using these combinations and manipulating the scoring metric of COMPGED enabled us to narrow the database list to medications that were relevant to our categories. This process transformed a tedious task for PROC COMPARE or an Excel macro into a quick and efficient method of matching. The task of sorting through relevant matches was still conducted manually, but the time required to do so was significantly decreased by the fuzzy match in our application of COMPGED.
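In skeleton form (data set and variable names are hypothetical, and the text fields are assumed to be long enough for a 20-character truncation): UPCASE and SUBSTR normalize the strings, the third COMPGED argument caps the search, and the HAVING clause keeps only the best candidate per raw entry.
   proc sql;
      create table candidate_matches as
      select r.med_text, c.med_name, c.category,
             compged(upcase(substr(r.med_text, 1, 20)),
                     upcase(substr(c.med_name, 1, 20)), 400) as ged_score
      from raw_meds as r, categorized_meds as c
      group by r.med_text
      having ged_score = min(ged_score) and ged_score < 400;
   quit;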
Arti Virkud, NYC Department of Health
Reporting effect sizes in addition to statistical significance is strongly encouraged, and effect sizes should be considered when evaluating the results of a study. The choice of an effect size for ANOVA models can be confusing because indices might differ depending on the research design as well as the magnitude of the effect. Olejnik and Algina (2003) proposed the generalized eta-squared and omega-squared effect sizes, which are comparable across a wide variety of research designs. This paper provides a SAS® macro for computing the generalized omega-squared effect size associated with analysis of variance models by using data from PROC GLM ODS tables. The paper provides the macro code, as well as results from an executed example of the macro.
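As a simplified taste of the ODS-table approach (not the generalized statistic computed by the macro; this is plain omega-squared for a one-way design, with hypothetical data set names):
   ods output OverallANOVA=aov;
   proc glm data=study;
      class group;
      model score = group;
   run; quit;

   proc sql noprint;
      select ss, df into :ss_m, :df_m from aov where source='Model';
      select ms      into :ms_e        from aov where source='Error';
      select ss      into :ss_t        from aov where source='Corrected Total';
   quit;

   data _null_;
      omega_sq = (&ss_m - &df_m * &ms_e) / (&ss_t + &ms_e);
      put 'Omega-squared (one-way design): ' omega_sq 6.3;
   run;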
Anh Kellermann, University of South Florida
Yi-hsin Chen, USF
Jeffrey Kromrey, University of South Florida
Thanh Pham, USF
Patrice Rasmussen, USF
Patricia Rodriguez de Gil, University of South Florida
Jeanine Romano, USF
Companies spend vast amounts of resources developing and enhancing proprietary software to clean their business data. Save time and obtain more accurate results by leveraging the SAS® Quality Knowledge Base (QKB), formerly a DataFlux® Data Quality technology. Tap into the existing QKB rules for cleansing contact information or product data, or easily design your own custom rules using the QKB editing tools. The QKB enables data management operations such as parsing, standardization, and fuzzy matching for contact information such as names, organizations, addresses, and phone numbers, or for product data attributes such as materials, colors, and dimensions. The QKB supports data in native character sets in over 38 locales. A single QKB can be shared by multiple SAS® Data Management installations across your enterprise, ensuring consistent results on workstations, servers, and massive parallel processing systems such as Hadoop. In this breakout, a SAS R&D manager demonstrates the power and flexibility of the QKB, and answers your questions about how to deploy and customize the QKB for your environment.
Brian Rineer, SAS
While there has been tremendous progress in technologies related to data storage, high-performance computing, and advanced analytic techniques, organizations have only recently begun to comprehend the importance of parallel strategies that help manage the cacophony of concerns around access, quality, provenance, data sharing, and use. While data governance is not new, the drumbeat around it, along with master data management and data quality, is approaching a crescendo. Intensified by the increase in consumption of information, expectations about ubiquitous access, and highly dynamic visualizations, these factors are also circumscribed by security and regulatory constraints. In this paper, we provide a summary of what data governance is and its importance. We go beyond the obvious and provide practical guidance on what it takes to build out a data governance capability appropriate to the scale, size, and purpose of the organization and its culture. Moreover, we discuss best practices in the form of requirements that highlight what we think is important to consider as you provide that tactical linkage between people, policies, and processes to the actual data lifecycle. To that end, our focus includes the organization and its culture, people, processes, policies, and technology. Further, we include discussions of organizational models as well as the role of the data steward, and provide guidance on how to formalize data governance into a sustainable set of practices within your organization.
Greg Nelson, ThotWave
Lisa Dodson, SAS
A SAS® Grid Manager environment provides your organization with a powerful and flexible way to manage many forms of SAS® computing workloads. For the business and IT user community, the benefits can range from data management jobs effectively utilizing the available processing resources, complex analyses being run in parallel, and reassurance that statutory reports are generated in a highly available environment. This workshop begins the process of familiarizing users with the core concepts of how to grid-enable tasks within SAS® Studio, SAS® Enterprise Guide®, SAS® Data Integration Studio, and SAS® Enterprise Miner™ client applications.
Edoardo Riva, SAS
This presentation provides a brief introduction to logistic regression analysis in SAS. Learn the differences between linear regression and logistic regression, including ordinary least squares versus maximum likelihood estimation. Learn to understand LOGISTIC procedure syntax, use continuous and categorical predictors, and interpret output from ODS Graphics.
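A minimal example along those lines, using a data set that ships with SAS:
   ods graphics on;
   proc logistic data=sashelp.heart plots(only)=roc;
      class sex (param=ref ref='Female');
      model status(event='Dead') = sex ageatstart cholesterol / clodds=wald;
   run;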
Danny Modlin, SAS
For decades, mixed models have been used by researchers to account for random sources of variation in regression-type models. Now they are gaining favor in business statistics to give better predictions for naturally occurring groups of data, such as sales reps, store locations, or regions. Learn about how predictions based on a mixed model differ from predictions in ordinary regression, and see examples of mixed models with business data.
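A small sketch with hypothetical business data: a random intercept for each sales rep lets predictions borrow strength across reps.
   proc mixed data=sales;
      class rep region;
      model revenue = price promo region / solution outpred=pred_blup;
      random intercept / subject=rep;    /* rep-specific random intercepts */
   run;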
Catherine Truxillo, SAS
Text data constitutes more than half of the unstructured data held in organizations. Buried within the narrative of customer inquiries, the pages of research reports, and the notes in servicing transactions are the details that describe concerns, ideas, and opportunities. The historical manual effort needed to develop a training corpus is no longer required, making it simpler to gain insight buried in unstructured text. With the ease of machine learning refined with the specificity of linguistic rules, SAS Contextual Analysis helps analysts identify and evaluate the meaning of the electronic written word. From a single point-and-click GUI, the process of developing text models is guided and visually intuitive. This presentation will walk through the text model development process with SAS Contextual Analysis. The results are in SAS format, ready for text-based insights to be used in any other SAS application.
George Fernandez, SAS
SAS/ETS provides many tools to improve the productivity of the analyst who works with time series data. This tutorial will take an analyst through the process of turning transaction-level data into a time series. The session will then cover some basic forecasting techniques that use past fluctuations to predict future events. We will then extend this modeling technique to include explanatory factors in the prediction equation.
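A compressed illustration with hypothetical transaction data: PROC TIMESERIES accumulates transactions into a monthly series, and PROC ESM produces a 12-month forecast.
   proc timeseries data=transactions out=monthly;
      by product;
      id saledate interval=month accumulate=total;   /* roll transactions up to monthly totals */
      var amount;
   run;

   proc esm data=monthly out=forecasts lead=12;
      by product;
      id saledate interval=month;
      forecast amount / model=winters;               /* Winters exponential smoothing */
   run;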
Kenneth Sanford, SAS
Learning the Graph Template Language (GTL) might seem like a daunting task. However, creating customized graphics with SAS® is quite easy using many of the tools offered with Base SAS® software. The point-and-click interface of ODS Graphics Designer provides us with a tool that can be used to generate highly polished graphics and to store the GTL-based code that creates them. This opens the door for users who would like to create canned graphics that can be used on various data sources, variables, and variable types. In this hands-on training, we explore the use of ODS Graphics Designer to create sophisticated graphics and to save the template code. We then discuss modifications using basic SAS macros in order to create stored graphics code that is flexible enough to accommodate a wide variety of situations.
Rebecca Ottesen, City of Hope and Cal Poly SLO
Leanne Goldstein, City of Hope
Do you have a SAS® program that requires adding filenames to the input every time you run it? Aren't you tired of having to check for the files, check the names, and type them in? Check out how my SAS® Enterprise Guide® project checks for files, figures out the file names, and saves me from having to type in the file names for the input data files!
Nancy Wilson, Ally
When ODS Graphics was introduced a few years ago, it gave SAS users an array of new ways to generate graphs. One of those ways is with Statistical Graphics procedures. Now, with just a few lines of simple code, you can create a wide variety of high-quality graphs. This paper shows how to produce single-celled graphs using PROC SGPLOT and paneled graphs using PROC SGPANEL. This paper also shows how to send your graphs to different ODS destinations, how to apply ODS styles to your graphs, and how to specify properties of graphs, such as format, name, height, and width.
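Two quick examples using data sets that ship with SAS:
   ods html style=htmlblue;
   proc sgplot data=sashelp.class;
      scatter x=height y=weight / group=sex;
      reg x=height y=weight / nomarkers;
   run;

   proc sgpanel data=sashelp.heart;
      panelby sex / columns=2;
      histogram cholesterol;
   run;
   ods html close;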
Susan Slaughter, Avocet Solutions
Axis tables, polygon plots, text plots, and more features have been added to Statistical Graphics (SG) procedures and Graph Template Language (GTL) for SAS® 9.4. These additions are a direct result of your feedback and are designed to make creating graphs easier. Axis tables let you add multiple tables of data to your graphs, correctly aligned with the axis values and using the right colors for group values in your data. Text plots can have rotated and aligned text anywhere in the graph. You can overlay jittered markers on box plots, use images and font glyphs as markers, specify group attributes without making style changes, and create entirely new custom graphs using the polygon plot. All this without using the annotation facility, which is now supported both for SG procedures and GTL. This paper guides you through these exciting new features now available in SG procedures and GTL.
Sanjay Matange, SAS
At the University of North Carolina at Chapel Hill, we had the pleasure of rolling out a strong enterprise-wide SAS® Visual Analytics environment in 10 months, with strong support from SAS. We encountered many bumps in the road, moments of both mountain highs and worrisome lows, as we learned what we could and could not do, and new ways to accomplish our goals. Our journey started in December of 2013 when a decision was made to try SAS Visual Analytics for all reporting, and incorporate other solutions only if and when we hit an insurmountable obstacle. We are still strongly using SAS Visual Analytics and are augmenting the tools with additional products. Along the way, we learned a number of things about the SAS Visual Analytics environment that are gems, whether one is relatively new to SAS® or an old hand. Measuring what is happening is paramount to knowing what constraints exist in the system before trying to enhance performance. Targeted improvements help if measurements can be made before and after each alteration. There are a few architectural alterations that can help in general, but we have seen that measuring is the guaranteed way to know what the problems are and whether the cures were effective.
Jonathan Pletzke, UNC Chapel Hill
Business managers are seeing the value of incorporating business information and analytics into daily decision-making with real-time information, when and where it is needed during business meetings and customer engagements. Real-time access to customer and business information reduces the latency in decision-making with confidence and accuracy, increasing the overall efficiency of the company. SAS is introducing new HTML5-based product options and adding advanced features in SAS® Mobile BI in SAS® Visual Analytics 7.2 to extend business managers' reach to, and experience of, SAS® analytics and dashboards from SAS Visual Analytics. With SAS Mobile BI 7.2, SAS will push the limits of a business user's ability to author and change the content of dashboards and reports on mobile devices. This presentation focuses on both the new HTML5-based product options and the new advancements made in SAS Mobile BI that empower business users. We present in detail the scope and new features that are offered with the HTML5-based viewer and with SAS Mobile BI from SAS Visual Analytics. Because the new HTML5-based viewer and SAS Mobile BI are the viewer options for business users to visualize and consume the content from SAS Visual Analytics, this presentation demonstrates the two products in detail and shows their key capabilities.
Murali Nori, SAS
As a SAS® Intelligence Platform Administrator, have your eyes ever glazed over as you performed repetitive tasks in SAS® Management Console or some other administrative user interface? Perhaps you're setting up metadata for a new department, managing a set of backups, or promoting content between dev, test, and prod environments. Did you know there is a large library of batch utilities to help you automate many of these common administration tasks? This paper explores content reporting and management utilities, such as viewing authorizations or relationships between content, as well as administrative tasks such as analyzing, creating, or deleting metadata repositories or performing a backup of the system. The batch utilities can be incorporated into scripts so that you can run them repeatedly on either an ad hoc or scheduled basis. Give your mouse a rest and save yourself some time.
Eric Bourn, SAS
Amy Peters, SAS
Bryan Wolfe, SAS
The SAS® Macro Language is a powerful tool for extending the capabilities of the SAS® System. This hands-on workshop teaches essential macro coding concepts, techniques, tips, and tricks to help beginning users learn the basics of how the macro language works. Using a collection of proven macro language coding techniques, attendees learn how to write and process macro statements and parameters; replace text strings with macro (symbolic) variables; generate SAS code using macro techniques; manipulate macro variable values with macro functions; create and use global and local macro variables; construct simple arithmetic and logical expressions; interface the macro language with the SQL procedure; store and reuse macros; troubleshoot and debug macros; and develop efficient and portable macro language code.
Kirk Paul Lafler, Software Intelligence Corporation
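A short, hedged sketch of the kind of beginner macro technique the workshop abstract above covers: a parameterized macro that loops over a list of variables and generates a PROC FREQ step for each. The macro name and example data set are illustrative only.

   %macro freq_report(dsn=, vars=);
      %local i var;
      %do i = 1 %to %sysfunc(countw(&vars));
         %let var = %scan(&vars, &i);
         proc freq data=&dsn;
            tables &var / missing;
            title "Frequency of &var in %upcase(&dsn)";
         run;
      %end;
      title;
   %mend freq_report;

   %freq_report(dsn=sashelp.cars, vars=origin type drivetrain)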
A group tasked with testing SAS® software from the customer perspective has gathered a number of helpful hints for SAS® 9.4 that will smooth the transition to its new features and products. These hints will help with the 'huh?' moments that crop up when you are getting oriented and will provide short, straightforward answers. We also share insights about changes in your order contents. Gleaned from extensive multi-tier deployments, SAS® Customer Experience Testing shares insiders' practical tips to ensure that you are ready to begin your transition to SAS 9.4. The target audience for this paper is primarily system administrators who will be installing, configuring, or administering the SAS 9.4 environment. (This paper is an updated version of the paper presented at SAS Global Forum 2014 and includes new features and software changes since the original paper was delivered, plus any relevant content that still applies. This paper includes information specific to SAS 9.4 and SAS 9.4 maintenance releases.)
Cindy Taylor, SAS
SAS® users are already familiar with the FCMP procedure and the flexibility it provides them in writing their own functions and subroutines. However, did you know that FCMP also allows you to call functions written in C? Did you know that you can create and populate complex C structures and use C types in FCMP? With the PROTO procedure, you can define function prototypes, structures, enumeration types, and even small bits of C code. This paper gets you started on how to use the PROTO procedure and, in turn, how to call your C functions from within FCMP and SAS.
Andrew Henrick, SAS
Karen Croft, SAS
Donald Erdman, SAS
In this session, we discuss the advantages of SAS® Federation Server and how it makes it easier for business users to access secure data for reports and use analytics to drive accurate decisions. This frees up IT staff to focus on other tasks by giving them a simple method of sharing data using a centralized, governed, security layer. SAS Federation Server is a data server that provides scalable, threaded, multi-user, and standards-based data access technology in order to process and seamlessly integrate data from multiple data repositories. The server acts as a hub that provides clients with data by accessing, managing, and sharing data from multiple relational and non-relational data sources as well as from SAS® data. Users can view data in big data sources like Hadoop, SAP HANA, Netezza, or Teradata, and blend them with existing database systems like Oracle or DB2. Security and governance features, such as data masking, ensure that the right users have access to the data and reduce the risk of exposure. Finally, data services are exposed via a REST API for simpler access to data from third-party applications.
Ivor Moan, SAS
No matter what type of programming you do in a pharmaceutical environment, there will eventually be a need to combine your data with a lookup table. This lookup table could be a code list for adverse events, a list of names for visits, or one of your own summary data sets containing totals that you will use to calculate percentages; you might already have a favorite way to incorporate it. This paper describes and discusses the reasons for using five different simple ways to merge data sets with lookup tables, so that when you take over the maintenance of a new program, you will be ready for anything!
Philip Holland, Holland Numerics Limited
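As a hedged sketch of two common lookup techniques of the kind the abstract above compares, the following uses hypothetical data set and variable names: a format built from the lookup table (assuming the code is a character variable), and a classic sort-and-merge.

   /* 1: build a format from the lookup table and apply it with PUT      */
   data ae_fmt;
      set lookup_ae(rename=(ae_code=start ae_term=label));
      retain fmtname '$AETERM' type 'C';
   run;

   proc format cntlin=ae_fmt;
   run;

   data ae_coded;
      set adverse_events;
      ae_term = put(ae_code, $aeterm.);
   run;

   /* 2: classic sort-and-merge                                          */
   proc sort data=adverse_events; by ae_code; run;
   proc sort data=lookup_ae;      by ae_code; run;

   data ae_merged;
      merge adverse_events(in=a) lookup_ae;
      by ae_code;
      if a;
   run;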
This study looks at several ways to investigate latent variables in longitudinal surveys and their use in regression models. Three different analyses for latent variable discovery are briefly reviewed and explored. The procedures explored in this paper are PROC LCA, PROC LTA, PROC CATMOD, PROC FACTOR, PROC TRAJ, and PROC SURVEYLOGISTIC. The analyses defined through these procedures are latent profile analyses, latent class analyses, and latent transition analyses. The latent variables are included in three separate regression models. The effect of the latent variables on the fit and use of the regression model compared to a similar model using observed data is briefly reviewed. The data used for this study was obtained via the National Longitudinal Study of Adolescent Health, a study distributed and collected by Add Health. Data was analyzed using SAS® 9.3. This paper is intended for any level of SAS® user. This paper is also aimed at an audience with a background in behavioral science or statistics.
Deanna Schreiber-Gregory, National University
SAS® blogs (hosted at http://blogs.sas.com/content) attract millions of page views annually. With hundreds of authors, thousands of posts, and constant chatter within the blog comments, it's impossible for one person to keep track of all of the activity. In this paper, you learn how SAS technology is used to gather data and report on SAS blogs from the inside out. The beneficiaries include personnel from all over the company, including marketing, technical support, customer loyalty, and executives. The author describes the business case for tracking and reporting on the activity of blogging. You learn how SAS tools are used to access the WordPress database and how to create a 'blog data mart' for reporting and analytics. The paper includes specific examples of the insight that you can gain from examining the blogs analytically, and which techniques are most useful for achieving that insight. For example, the blog transactional data are combined with social media metrics (also gathered by using SAS) to show which blog entries and authors yield the most engagement on Twitter, Facebook, and LinkedIn. In another example, we identified the growing trend of 'blog comment spam' on the SAS blog properties and measured its cost to the business. These metrics helped to justify the investment in a solution. Many of the tools used are part of SAS® Foundation, including SAS/ACCESS®, the DATA step and SQL, PROC REPORT, PROC SGPLOT, and more. The results are shared in static reports, automated daily email summaries, dynamic reports hosted in SAS/IntrNet®, and even a corporate dashboard hosted in SAS® Visual Analytics.
Chris Hemedinger, SAS
With the constant need to inform researchers about neighborhood health data, the Santa Clara County Health Department created socio-demographic and health profiles for 109 neighborhoods in the county. Data was pulled from many public and county data sets, compiled, analyzed, and automated using SAS®. With over 60 indicators and 109 profiles, an efficient set of macros was used to automate the calculation of percentages, rates, and mean statistics for all of the indicators. Macros were also used to automate individual census tracts into pre-decided neighborhoods to avoid data entry errors. Simple SQL procedures were used to calculate and format percentages within the macros, and output was pushed out using Output Delivery System (ODS) Graphics. This output was exported to Microsoft Excel, which was used to create a sortable database for end users to compare cities and/or neighborhoods. Finally, the automated SAS output was used to map the demographic data using geographic information system (GIS) software at three geographies: city, neighborhood, and census tract. This presentation describes the use of simple macros and SAS procedures to reduce resources and time spent on checking data for quality assurance purposes. It also highlights the simple use of ODS Graphics to export data to an Excel file, which was used to mail merge the data into 109 unique profiles. The presentation is aimed at intermediate SAS users at local and state health departments who might be interested in finding an efficient way to run and present health statistics given limited staff and resources.
Roshni Shah, Santa Clara County
Storage space on a UNIX platform is a costly--and finite--resource to maintain, even under ideal conditions. By regularly monitoring and promptly responding to space limitations that might occur during production, an organization can mitigate the risk of wasted expense, time, and effort caused by this problem. SAS® programmers at Truven Health Analytics have designed a reporting tool to measure space usage by a number of distinct factors over time. Using tabular and graphical output, the tool provides a full picture of what often contributes to critical reductions of available hardware space. It enables managers and users to respond appropriately and effectively whenever this occurs. It also helps to identify ways to encourage more efficient practices, thereby minimizing the likelihood of this occurring in the future. Environment: Red Hat Enterprise Linux (RHEL) 5.4 on an Oracle Sun Fire X4600 M2 server, running SAS® 9.3 TS1M1.
Matthew Shevrin, Truven Health Analytics
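A hedged sketch of one way such a tool can capture UNIX space usage from within SAS: pipe the output of a df command into a DATA step. The mount point and the exact df column layout are site-specific assumptions.

   filename diskinf pipe 'df -k /sasdata';    /* hypothetical mount point */

   data space_usage;
      infile diskinf firstobs=2 truncover;    /* skip the df header row   */
      input filesystem :$64. kb_total kb_used kb_avail pct_used :$8. mount :$64.;
      report_dt = datetime();
      format report_dt datetime20.;
   run;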
Your electricity usage patterns reveal a lot about your family and routines. Information collected from electrical smart meters can be mined to identify patterns of behavior that can in turn be used to help change customer behavior for the purpose of altering system load profiles. Demand Response (DR) programs represent an effective way to cope with rising energy needs and increasing electricity costs. The Federal Energy Regulatory Commission (FERC) defines demand response as changes in electric usage by end-use customers from their normal consumption patterns in response to changes in the price of electricity over time, or to incentive payments designed to lower electricity use at times of high wholesale market prices or when system reliability is jeopardized. In order to effectively motivate customers to voluntarily change their consumption patterns, it is important to identify customers whose load profiles are similar so that targeted incentives can be directed toward these customers. Hence, it is critical to use tools that can accurately cluster similar time series patterns while providing a means to profile these clusters. In order to solve this problem, though, hardware and software that is capable of storing, extracting, transforming, loading, and analyzing large amounts of data must first be in place. Utilities receive customer data from smart meters, which track and store customer energy usage. The data collected is sent to the energy companies every fifteen minutes or hourly. With millions of meters deployed, this quantity of information creates a data deluge for utilities, because each customer generates about three thousand data points monthly, and more than thirty-six billion reads are collected annually for a million customers. The data scientist is the hunter, and DR candidate patterns are the prey in this cat-and-mouse game of finding customers willing to curtail electrical usage for a program benefit. The data scientist must connect large siloed data sources, external data, and even unstructured data to detect common customer electrical usage patterns, build dependency models, and score them against the customer population. Taking advantage of Hadoop's ability to store and process data on commodity hardware with distributed parallel processing is a game changer. With Hadoop, no data set is too large, and SAS® Visual Statistics leverages machine learning, artificial intelligence, and clustering techniques to build descriptive and predictive models. Data from disparate systems, including structured data, unstructured data, and log files, can all be used. The data scientist can use Hadoop to ingest all available data at rest, and analyze customer usage patterns, system electrical flow data, and external data such as weather. This paper uses Cloudera Hadoop with Apache Hive queries for analysis on platforms such as SAS® Visual Analytics and SAS Visual Statistics. The paper showcases options within Hadoop for querying large data sets with open-source tools and for importing these data into SAS® for robust customer analytics: clustering customers by usage profile, modeling their propensity to respond to a demand response event, and analyzing the electrical system for Demand Response events.
Kathy Ball, SAS
The first task in accomplishing our SAS® 9.4 installation goal is to create a secured Amazon Web Services (AWS) EC2 (Elastic Compute Cloud) instance within a Virtual Private Cloud (VPC). Through a series of wizard-driven dialog boxes, the SAS administrator selects virtual CPUs (vCPUs, which have about a 2:1 ratio to cores), memory, storage, and network performance considerations via regional availability zones. Then, there is a prompt to create the VPC that will house the EC2 instance, along with a major component called subnets. A step to create a security group is next, which enables the SAS administrator to specify all of the VPC firewall port rules required for the SAS 9.4 application. Next, the EC2 instance is reviewed and a security key pair is either selected or created. Then the EC2 instance launches. At this point, Internet connectivity to the EC2 instance is granted by attaching an Internet gateway and its route table to the VPC and allocating and associating an elastic IP address along with a public DNS. The second major task involves establishing connectivity to the EC2 instance and a method of downloading SAS software. In the case of the Red Hat Linux instance created here, PuTTY is configured to use the EC2 instance's security key pair (.ppk file). In order to transfer files securely to the EC2 instance, a tool such as WinSCP is installed and uses the PuTTY connection for secure FTP. The Linux OS is then updated, and VNCServer is installed and configured so that the SAS administrator can use a GUI. Finally, a Firefox web browser is installed to download the SAS® Download Manager. After downloading the SAS Download Manager, a SAS depot directory is created on the Linux file system and the SAS Download Manager is run once we have provided the software order number and SAS installation key. Once the SAS software depot has been loaded, we can verify the success of the SAS software depot's download by running the SAS depot checker. The next pre-installation task is to take care of some Linux OS housekeeping. Local users are created: the SAS installation ID sas and other IDs such as sassrv, lsfadmin, lsfuser, and sasdemo. Specific directory permissions are set for the installer ID sas. The ulimit settings for open files and maximum user processes are increased, and directories are created for a SAS installation home and configuration directory. Third-party tools such as Python, which are required for SAS 9.4, are installed. Then the Korn shell and other required Linux packages are installed. Finally, the SAS Deployment Manager installation wizard is launched and the multiple dialog boxes are filled out, with many defaults accepted and Next clicked. SAS administrators should consider running the SAS Deployment Manager twice: first to install only the SAS software, and later to configure it. After the SAS Deployment Manager completes, SAS post-installation tasks are performed.
Jeff Lehmann, Slalom Consulting
Today's SAS® environment has large numbers of concurrent SAS processes and ever-growing data volumes. To help SAS users remain productive, SAS administrators must ensure that SAS applications have sufficient computer resources that are properly configured and monitored often. Understanding how all the components of SAS work and how they will be used by your users is the first step. The guidance offered in this paper will help SAS administrators evaluate hardware, operating system, and infrastructure options for a SAS environment that will keep their SAS applications running at optimal performance and their user community happy.
Margaret Crevar, SAS
How do you engage your report viewer on an emotional and intellectual level and tell the story of your data? You create a perfect graphic to tell that story using SAS® Visual Analytics Graph Builder. This paper takes you on a journey by combining and manipulating graphs to refine your data's best possible story. This paper shows how layering visualizations can create powerful and insightful viewpoints on your data. You will see how to create multiple overlay graphs, single graphs with custom options, data-driven lattice graphs, and user-defined lattice graphs to vastly enhance the story-telling power of your reports and dashboards. Some examples of custom graphs covered in this paper are: resource timelines combined with scatter plots and bubble plots to enhance project reporting, butterfly charts combined with bubble plots to provide a new way to show demographic data, and bubble change plots to highlight the journey your data has traveled. This paper will stretch your imagination and showcase the art of the possible and will take your dashboard from mediocre to miraculous. You will definitely want to share your creative graph templates with your colleagues in the global SAS® community.
Travis Murphy, SAS
The telecommunications industry is the fastest-changing business ecosystem in this century. Therefore, handset campaigning to increase loyalty is the top issue for telco companies. However, these handset campaigns carry great fraud and payment risks if the companies do not have the ability to classify and assess customers properly according to their risk propensity. For many years, until the launch of analytics solutions into the market, telco companies managed the risk with business rules such as customer tenure. But such rules give telco companies little basis for selling handsets to new customers. On the other hand, with increasing competitive pressure on telco companies, it is necessary to use external credit data to sell handsets to new customers. Credit bureau data was a good opportunity to measure and understand the behaviors of applicants, but using external data required system integration and real-time decision systems. For those reasons, we needed a solution that enables us to predict risky customers and then integrate risk scores and all information into one real-time decision engine for optimized handset application vetting. After an assessment period, the SAS® analytics platform and SAS® Real-Time Decision Manager (RTDM) were chosen as the most suitable solution because they provide a flexible, user-friendly interface, high integration, and fast deployment capability. In this project, we built a process that includes three main stages to transform the data into knowledge: data collection, predictive modelling, and deployment and decision optimization. a) Data Collection: We designed a specific, daily updated data mart that connects internal payment behavior, demographics, and customer experience data with external credit bureau data. In this way, we can turn data into meaningful knowledge for a better understanding of customer behavior. b) Predictive Modelling: To make full use of the company's potential, it is critically important to use an analytics approach that is based on state-of-the-art technologies. We built nine models to predict customer propensity to pay. As a result of better classification of customers, we obtained satisfactory results in designing collection scenarios and the decision model for handset application vetting. c) Deployment and Decision Optimization: Knowledge is not enough to reach success in business. It should be turned into optimized decisions and deployed in real time. For this reason, we have been using SAS® predictive analytics tools and SAS Real-Time Decision Manager to turn data into knowledge and knowledge into strategy and execution. With this system, we are now able to assess customers properly and to sell handsets even to brand-new customers as part of the application vetting process. As a result, while decreasing nonpayment risk, we generated extra revenue from brand-new contracted customers. In three months, 13% of all handset sales were concluded via RTDM. Another benefit of the RTDM is a 30% cost saving in external data inquiries. Thanks to the RTDM, Avea has become the first telecom operator to use bureau data in the Turkish telco industry.
Hurcan Coskun, Avea
The DATA step has served SAS® programmers well over the years, and although it is powerful, it has not fundamentally changed. With DS2, SAS introduced a significant alternative to the DATA step by providing an object-oriented programming environment. In this paper, we share our experiences with getting started with DS2 and learning to use it to access, manage, and share data in a scalable, threaded, and standards-based way.
Peter Eberhardt, Fernwood Consulting Group Inc.
Xue Yao, Winnipeg Regional Health Authority
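A minimal, hedged sketch of the object-oriented DS2 style the abstract above introduces; the output table and variable names are illustrative only.

   proc ds2;
      data work.ds2_demo / overwrite=yes;
         dcl double x amount;
         dcl varchar(12) category;

         method run();
            do x = 1 to 5;
               amount = x * 2.5;
               category = catx('_', 'grp', x);
               output;
            end;
         end;
      enddata;
   run;
   quit;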
SAS® University Edition is a great addition to the world of freely available analytic software, and this 'how-to' presentation shows you how to implement a discrete event simulation using Base SAS® to model future US Veterans population distributions. Features include generating a slideshow using ODS output to PowerPoint.
Michael Grierson
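As a hedged illustration of the ODS PowerPoint feature mentioned in the abstract above, the following sketch routes a simple graph of hypothetical simulation output to a slideshow file; the data set, file name, and style are assumptions.

   ods powerpoint file='veterans_projection.pptx' style=powerpointlight;

   title 'Projected Population by Year';
   proc sgplot data=work.sim_results;      /* hypothetical simulation output */
      series x=year y=projected_count;
   run;

   ods powerpoint close;
   title;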
SAS® Grid Computing promises many benefits that the SAS® community has been demanding for years, including workload management of SAS applications, a highly available infrastructure, higher resource utilization, flexibility for IT infrastructure, and potentially improved performance of SAS applications. But to realize these benefits, you need to have a good definition of what you need and an understanding of what is involved in enabling SAS tasks to take advantage of all the SAS grid nodes. In addition to having this understanding of SAS, the underlying hardware infrastructure (cores to storage) must be configured and tuned correctly. This paper discusses the most important things (or misunderstandings) that SAS customers need to know before they deploy SAS® Grid Manager.
Doug Haigh, SAS
Glenn Horton, SAS
Missing data is an unfortunate reality of statistics. However, there are various ways to estimate and deal with missing data. This paper explores the pros and cons of traditional imputation methods versus maximum likelihood estimation, as well as single versus multiple imputation. These differences are displayed by comparing parameter estimates of a known data set and simulating random missing data of different severity. In addition, this paper uses PROC MI and PROC MIANALYZE and shows how to use these procedures in a longitudinal data set.
Christopher Yim, Cal Poly San Luis Obispo
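A hedged sketch of the standard multiple-imputation workflow the abstract above refers to: impute, analyze each completed data set, then combine. Data set and variable names are hypothetical.

   proc mi data=work.study nimpute=5 seed=20160401 out=work.imputed;
      var y x1 x2;
   run;

   proc reg data=work.imputed outest=work.est covout noprint;
      model y = x1 x2;
      by _imputation_;
   run;

   proc mianalyze data=work.est;
      modeleffects intercept x1 x2;
   run;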
Since the financial crisis of 2008, banks and bank holding companies in the United States have faced increased regulation. One of the recent changes to these regulations is known as the Comprehensive Capital Analysis and Review (CCAR). At the core of these new regulations, specifically under the Dodd-Frank Wall Street Reform and Consumer Protection Act and the stress tests it mandates, are a series of what-if or scenario analyses requirements that involve a number of scenarios provided by the Federal Reserve. This paper proposes frequentist and Bayesian time series methods that solve this stress testing problem using a highly practical top-down approach. The paper focuses on the value of using univariate time series methods, as well as the methodology behind these models.
Kenneth Sanford, SAS
Christian Macaro, SAS
Why did my merge fail? How did that variable get truncated? Why am I getting unexpected results? Understanding how the DATA step actually works is the key to answering these and many other questions. In this paper, two independent consultants with a combined three decades of SAS® programming experience share a treasure trove of knowledge aimed at helping the novice SAS programmer take his or her game to the next level by peering behind the scenes of the DATA step. We touch on a variety of topics, including compilation versus execution, the program data vector, and proper merging techniques, with a focus on good programming practices that help the programmer steer clear of common pitfalls.
Joshua Horstman, Nested Loop Consulting
Britney Gilbert, Juniper Tree Consulting
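To illustrate the proper merging techniques discussed in the abstract above, here is a minimal, hedged sketch using hypothetical data sets: sort both inputs, use IN= flags, and keep only the records you intend to keep.

   proc sort data=work.visits; by subject_id; run;
   proc sort data=work.demog;  by subject_id; run;

   data work.visits_demog;
      merge work.visits (in=in_visits)
            work.demog  (in=in_demog);
      by subject_id;
      if in_visits;                         /* keep all visit records      */
      matched = (in_visits and in_demog);   /* flag whether demog was found */
   run;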
In SAS® software development, data specifications and process requirements can be built into a user-defined control data set that functions as a component of ETL routines. A control data set provides a comprehensive definition of the data source, relationships, logic, description, and metadata of each data element. This approach facilitates auto-generating SAS code during program execution to perform data ingestion, transformation, and loading procedures based on rules defined in the table. This paper demonstrates the application of using a control data set for the following: (1) data table initialization and integration; (2) validation and quality control; (3) element transformation and creation; (4) data loading; and (5) documentation. SAS programmers and business analysts will find programming development and maintenance of business rules more efficient with this standardized method.
Edmond Cheng, CACI International Inc
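A hedged sketch of the general idea in the abstract above: rules stored in a control data set drive generated code via CALL EXECUTE. The control table layout, rules, and data set names are hypothetical.

   data work.control;
      length target $32 expression $200;
      input target $ expression & $200.;
      datalines;
   age_years    floor((intck('month', birth_dt, today())) / 12)
   high_value   (annual_spend > 5000)
   ;
   run;

   data _null_;
      set work.control end=last;
      if _n_ = 1 then call execute('data work.customer_derived; set work.customer;');
      call execute(catx(' ', target, '=', expression, ';'));
      if last then call execute('run;');
   run;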
Microsoft SharePoint has been adopted by a number of companies today as their content management tool because of its ability to create and manage documents, records, and web content. It is described as an enterprise collaboration platform with a variety of capabilities, and thus it stands to reason that this platform should also be used to surface content from analytical applications such as SAS® and the R language. SAS provides various methods for surfacing SAS content through SharePoint. This paper describes one such methodology that is both simple and elegant, requiring only SAS Foundation. It also explains how SAS and R can be used together to form a robust solution for delivering analytical results. The paper outlines the approach for integrating both languages into a single security model that uses Microsoft Active Directory as the primary authentication mechanism for SharePoint. It also describes how to extend the authorization to SAS running on a Linux server where LDAP is used. Users of this system are blissfully ignorant of the back-end technology components, as we offer up a seamless interface where they simply authenticate to the SharePoint site and the rest is, as they say, magic.
Piyush Singh, Tata Consultancy Services Limited
Prasoon Sangwan, Tata Consultancy Services
Shiv Govind Yadav
Generalized linear models are highly useful statistical tools in a broad array of business applications and scientific fields. How can you select a good model when numerous models that have different regression effects are possible? The HPGENSELECT procedure, which was introduced in SAS/STAT® 12.3, provides forward, backward, and stepwise model selection for generalized linear models. In SAS/STAT 14.1, the HPGENSELECT procedure also provides the LASSO method for model selection. You can specify common distributions in the family of generalized linear models, such as the Poisson, binomial, and multinomial distributions. You can also specify the Tweedie distribution, which is important in ratemaking by the insurance industry and in scientific applications. You can run the HPGENSELECT procedure in single-machine mode on the server where SAS/STAT is installed. With a separate license for SAS® High-Performance Statistics, you can also run the procedure in distributed mode on a cluster of machines that distribute the data and the computations. This paper shows you how to use the HPGENSELECT procedure both for model selection and for fitting a single model. The paper also explains the differences between the HPGENSELECT procedure and the GENMOD procedure.
Gordon Johnston, SAS
Bob Rodriguez, SAS
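A hedged sketch of a PROC HPGENSELECT call of the kind described in the abstract above, fitting a Poisson model with stepwise selection; the data set and variables are hypothetical.

   proc hpgenselect data=work.claims;
      class gender region;
      model claim_count = gender region age vehicle_value / distribution=poisson;
      selection method=stepwise;
   run;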
This presentation teaches the audience how to use ODS Graphics. Now part of Base SAS®, ODS Graphics are a great way to easily create clear graphics that enable any user to tell their story well. SGPLOT and SGPANEL are two of the procedures that can be used to produce powerful graphics that used to require a lot of work. The core of the procedures is explained, as well as some of the many options available. Furthermore, we explore the ways to combine the individual statements to make more complex graphics that tell the story better. Any user of Base SAS on any platform will find great value in the SAS ODS Graphics procedures.
Chuck Kincaid, Experis
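A small, hedged example of the paneled graphs the presentation above covers, using a SAS-supplied sample data set.

   proc sgpanel data=sashelp.cars;
      panelby origin / columns=3;
      histogram mpg_city;
      density mpg_city;
   run;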
Organizations are loading data into Hadoop platforms at an extraordinary rate. However, in order to extract value from these platforms, the data must be prepared for analytic use. As the volume of data grows, it becomes increasingly important to reduce data movement, as well as to leverage the computing power of these distributed systems. This paper provides a cursory overview of SAS® Data Loader, a product specifically aimed at these challenges. We cover the underlying mechanisms of how SAS Data Loader works, as well as how it's used to profile, cleanse, transform, and ultimately prepare data for analytics in Hadoop.
Keith Renison, SAS
The SAS® hash object is an incredibly powerful technique for integrating data from two or more data sets based on a common key. This session describes the basic methodology for defining, populating, and using a hash object to perform lookups within the DATA step and provides examples of situations in which the performance of SAS programs is improved by their use. Common problems encountered when using hash objects are explained, and tools and techniques for optimizing hash objects within your SAS program are demonstrated.
Chris Schacherer, Clinical Data Management Systems, LLC
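A hedged sketch of the basic hash lookup pattern described in the abstract above, using hypothetical data set and variable names: define the hash from the lookup table once, then call FIND for each incoming record.

   data work.orders_enriched;
      if 0 then set work.products(keep=product_id product_name unit_cost);

      if _n_ = 1 then do;
         declare hash p(dataset: 'work.products');
         p.defineKey('product_id');
         p.defineData('product_name', 'unit_cost');
         p.defineDone();
      end;

      set work.orders;
      if p.find() = 0 then margin = sale_price - unit_cost;
      else call missing(product_name, unit_cost, margin);
   run;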
This paper introduces Jeffreys interval for one-sample proportion using SAS® software. It compares the credible interval from a Bayesian approach with the confidence interval from a frequentist approach. Different ways to calculate the Jeffreys interval are presented using PROC FREQ, the QUANTILE function, a SAS program of the random walk Metropolis sampler, and PROC MCMC.
Wu Gong, The Children's Hospital of Philadelphia
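As a worked illustration of one of the approaches named in the abstract above, the Jeffreys interval for x successes in n trials is given by the alpha/2 and 1-alpha/2 quantiles of a Beta(x + 0.5, n - x + 0.5) distribution, which the QUANTILE function computes directly; the counts here are made up.

   data jeffreys_ci;
      x = 8; n = 20; alpha = 0.05;
      p_hat = x / n;
      lower = quantile('BETA', alpha/2,     x + 0.5, n - x + 0.5);
      upper = quantile('BETA', 1 - alpha/2, x + 0.5, n - x + 0.5);
      put p_hat= lower= upper=;
   run;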
Examples include how to join when your data is perfect, how to join when your data does not match and you have to manipulate it in order to join, how to create a subquery, and how to use a subquery.
Anita Measey, Bank of Montreal, Risk Capital & Stress Testing
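A minimal, hedged sketch of a join combined with a subquery of the kind listed above; table and column names are hypothetical.

   proc sql;
      create table work.big_spenders as
      select a.customer_id,
             a.txn_amount,
             b.region
      from work.transactions as a
           inner join work.customers as b
           on a.customer_id = b.customer_id
      where a.txn_amount > (select avg(txn_amount) from work.transactions);
   quit;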
The cyclical coordinate descent method is a simple algorithm that has been used for fitting generalized linear models with lasso penalties by Friedman et al. (2007). The coordinate descent algorithm can be implemented in Base SAS® to perform efficient variable selection and shrinkage for GLMs with the L1 penalty (the lasso).
Robert Feyerharm, Beacon Health Options
SAS® Visual Analytics opens up a world of intuitive interactions, providing report creators the ability to develop more efficient ways to deliver information. Business-related hierarchies can be defined dynamically in SAS Visual Analytics to group data more efficiently--no more going back to the developers. Visualizations can interact with each other, with other objects within other sections, and even with custom applications and SAS® stored processes. This paper provides a blueprint to streamline and consolidate reporting efforts using these interactions available in SAS Visual Analytics. The goal of this methodology is to guide users down information pathways that can progressively subset data into smaller, more understandable chunks of data, while summarizing each layer to provide insight along the way. Ultimately the final destination of the information pathway holds a reasonable subset of data so that a user can take action and facilitate an understood outcome.
Stephen Overton, Zencos Consulting
SAS® customers benefit greatly when they are using the functionality, performance, and stability available in the latest version of SAS. However, the task of moving all SAS collateral such as programs, data, catalogs, metadata (stored processes, maps, queries, reports, and so on), and content to SAS® 9.4 can seem daunting. This paper provides an overview of the steps required to move all SAS collateral from systems based on SAS® 9.2 and SAS® 9.3 to the current release of SAS® 9.4.
Alec Fernandez, SAS
From stock price histories to hospital stay records, analysis of time series data often requires the use of lagged (and occasionally lead) values of one or more analysis variables. For the SAS® user, the central operational task is typically getting lagged (lead) values for each time point in the data set. Although SAS has long provided a LAG function, it has no analogous lead function--an especially significant problem in the case of large data series. This paper reviews the LAG function (in particular, the powerful but non-intuitive implications of its queue-oriented basis), demonstrates efficient ways to generate leads with the same flexibility as the LAG function (but without the common and expensive recourse of data re-sorting), and shows how to dynamically generate leads and lags through the use of the hash object.
Mark Keintz, Wharton Research Data Services
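A hedged sketch of one inexpensive way to build a one-period lead without re-sorting, alongside the LAG function for comparison. It assumes a single time-ordered series (no BY groups); data set and variable names are hypothetical.

   /* Lead: merge the series with itself, offset by one observation */
   data work.prices_with_lead;
      merge work.prices
            work.prices(firstobs=2 keep=close rename=(close=close_lead1));
   run;

   /* Lag: the LAG function builds the lagged value in a single pass */
   data work.prices_with_lag;
      set work.prices;
      close_lag1 = lag(close);
   run;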
Across the languages of SAS® are many golden nuggets--functions, formats, and programming features just waiting to impress your friends and colleagues. Learning SAS over 30+ years, I have collected a few, and I offer them to you in this presentation.
Peter Crawford, Crawford Software Consultancy Limited
Although today's marketing teams enjoy large-scale campaign relationship management systems, many are still left with the task of bridging the well-known gap between campaigns and customer purchasing decisions. During this session, we discuss how Slalom Consulting and Celebrity Cruises decided to take a bold step and bridge that gap. We show how marketing efforts are distorted when a team considers only the last campaign sent to a customer that later booked a cruise. Then we lay out a custom-built SAS 9.3 solution that scales to process thousands of campaigns per month using a stochastic attribution technique. This approach considers all of the campaigns that touch the customer, assigning a single campaign or a set of campaigns that contributed to their decision.
Christopher Byrd, Slalom Consulting
In-database processing refers to the integration of advanced analytics into the data warehouse. With this capability, analytic processing is optimized to run where the data reside, in parallel, without having to copy or move the data for analysis. From a data governance perspective there are many good reasons to embrace in-database processing. Many analytical computing solutions and large databases use this technology because it provides significant performance improvements over more traditional methods. Come learn how Blue Cross Blue Shield of Tennessee (BCBST) uses in-database processing from SAS and Teradata.
Harold Klagstad, BlueCross BlueShield of TN
SAS® Environment Manager helps SAS® administrators and system administrators manage SAS resources and effectively monitor the environment. SAS Environment Manager provides administrators with a centralized location for accessing and monitoring the SAS® Customer Intelligence environment. This enables administrators to identify problem areas and to maintain an in-depth understanding of the day-to-day activities on the system. It is also an excellent way to predict the usage and growth of the environment for scalability. With SAS Environment Manager, administrators can set up monitoring for CI logs (for example, SASCustIntelCore6.3.log, SASCustIntelStudio6.3.log) and other general logs from the SAS® Intelligence Platform. This paper contains examples for administrators who support SAS Customer Intelligence to set up this type of monitoring. It provides recommendations for approaches and for how to interpret the results from SAS Environment Manager.
Daniel Alvarez, SAS
SAS® provides a number of tools for creating customized professional reports. While SAS provides point-and-click interfaces through products such as SAS® Web Report Studio, SAS® Visual Analytics, or even SAS® Enterprise Guide®, many users unfortunately do not have access to the high-end tools and require customization beyond the SAS Enterprise Guide point-and-click interface. Fortunately, Base SAS procedures such as the REPORT procedure, combined with graphics procedures, macros, ODS, and Annotate, can be used to create very customized professional reports. When combining different solutions such as SAS Statistical Graphics, the REPORT procedure, ODS, and SAS/GRAPH®, different techniques need to be used to keep the same look and feel throughout the report package. This presentation looks at solutions that can be used to keep a consistent look and feel in a report package created with different SAS products.
Barbara Okerson, Anthem
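A hedged sketch of the PROC REPORT plus ODS combination the presentation above builds on, using a SAS-supplied sample data set and a placeholder output file name.

   ods pdf file='enrollment_summary.pdf' style=journal;

   proc report data=sashelp.class nowd;
      column sex n height weight;
      define sex    / group 'Sex';
      define n      / 'Count';
      define height / analysis mean format=6.1 'Mean Height';
      define weight / analysis mean format=6.1 'Mean Weight';
   run;

   ods pdf close;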
Automated decision-making systems are now found everywhere, from your bank to your government to your home. For example, when you inquire for a loan through a website, a complex decision process likely runs combinations of statistical models and business rules to make sure you are offered a set of options for tantalizing terms and conditions. To make that happen, analysts diligently strive to encode their complex business logic into these systems. But how do you know if you are making the best possible decisions? How do you know if your decisions conform to your business constraints? For example, you might want to maximize the number of loans that you provide while balancing the risk among different customer categories. Welcome to the world of optimization. SAS® Business Rules Manager and SAS/OR® software can be used together to manage and optimize decisions. This presentation demonstrates how to build business rules and then optimize the rule parameters to maximize the effectiveness of those rules. The end result is more confidence that you are delivering an effective decision-making process.
David Duling, SAS
SAS® 9.4 introduced extended attributes, which are name-value pairs that can be attached to either the data set or to individual variables. Extended attributes are managed through PROC DATASETS and can be viewed through PROC CONTENTS or through Dictionary.XATTRS. This paper describes the development of a SAS® Enterprise Guide® custom add-in that allows for the entry and editing of extended attributes, with the possibility of using a controlled vocabulary. The controlled vocabulary used in the initial application is derived from the lifecycle branch of the Data Documentation Initiative metadata standard (DDI-L).
Larry Hoyle, IPSR, Univ. of Kansas
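A rough, hedged sketch of the extended-attribute plumbing the abstract above relies on: attributes set through PROC DATASETS and inspected through PROC CONTENTS or Dictionary.XATTRS. The attribute names shown are hypothetical, not the DDI-L vocabulary used in the add-in.

   proc datasets library=work nolist;
      modify survey_wave1;
         xattr set ds  StudyName='Flourishing Pilot' LifecycleStage='DataCollection';
         xattr set var income (units='USD per year' concept='PersonalIncome');
   quit;

   proc contents data=work.survey_wave1;
   run;

   proc sql;
      select *
      from dictionary.xattrs
      where libname='WORK' and memname='SURVEY_WAVE1';
   quit;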
File management is a tedious process that can be automated by using SAS® to create and execute a Windows command script. The macro in this paper copies files from one location to another, identifies obsolete files by the version number, and then moves them to an archive folder. Assuming that some basic conditions are met, this macro is intended to be easy to use and robust. Windows users who run routine programs for projects with rework might want to consider this solution.
Jason Wachsmuth, Public Policy Center at The University of Iowa
Qualtrics is an online survey tool that offers a variety of features useful to researchers. In this paper, we show you how to implement the different options available for distributing surveys and downloading survey responses. We use the FILENAME statement (URL access method) and process the API responses with SAS® XML Mapper. In addition, we show an approach for how to keep track of active and inactive respondents.
Faith Parsons, Columbia University Medical Center
Sean Mota, Columbia University Medical Center
Yan Quan, Columbia University
The SAS® Web Application Server is a lightweight server that provides enterprise-class features for running SAS® middle-tier web applications. This server can be configured to use the SAS® Web Infrastructure Platform Data Server for a transactional storage database. You can meet the high-availability data requirement in your business plan by implementing a SAS Web Infrastructure Data Server cluster. This paper focuses on how the SAS Web Infrastructure Data Server on the SAS middle tier can be configured for load balancing, and data replication involving multiple nodes. SAS® Environment Manager and pgpool-II are used to enable these high-availability strategies, monitor the server status, and initiate failover as needed.
Ken Young, SAS
This paper describes the main functions of the SAS® SG procedures and their relationships. It also offers a way to create data-colored maps using these procedures. Here are the basics of the SG procedures. For a few years, the SG procedures (PROC SGPLOT, PROC SGSCATTER, PROC SGPANEL, and so on) have been part of Base SAS® and thus available for everybody. SG originated as Statistical Graphics, but nowadays the procedures are often referred to as SAS® ODS Graphics. With the syntax in a 1000+ page document, it is quite a challenge to start using them. Also, SAS® Enterprise Guide® currently has no graphics tasks that generate code for the SG procedures (except those in the statistical arena). For a long time, SAS/GRAPH® has been the vehicle for producing presentation-ready graphs of your data. In particular, SAS users who have experience with those SAS/GRAPH procedures will hesitate to change over. But the SG procedures continue to be enhanced with new features. And, because the appearance of many elements is governed by the ODS styles, they are very well suited to provide a consistent style across all your output, both text and graphics. PROC SGPLOT, PROC SGPANEL, and PROC SGSCATTER: The paper first describes the basic procedure that a user will start with, PROC SGPLOT, and then the more elaborate possibilities of PROC SGPANEL and PROC SGSCATTER. Both of these procedures can create a matrix or panel of graphs, and their different goals will be explained: comparing a group of variables versus comparing the levels of two variables. PROC SGPLOT can create many different graphs: histograms, time series, scatter plots, and so on. PROC SGPANEL has essentially the same possibilities, while the nature of PROC SGSCATTER (as the name says) limits it to scatter-like graphs. But many statements and options are common to lots of types of graphs, and this paper groups them logically, making clear what the procedures have in common and where they differ. Related to the SG procedures are also two utilities (the ODS Graphics Editor and the ODS Graphics Designer), which are delivered as SAS® Foundation applications. The paper describes the relationships among these utilities, the objects they produce, and the relevant SG procedures. Creating a map: For virtually all tasks that can be performed with the well-known SAS/GRAPH procedures, the counterpart in the SG procedures is easily pointed out, often with more extensive features. This is not the case, however, for the maps produced with PROC GMAP. This paper shows the few steps that are necessary to convert the data sets that contain your data and your map coordinates into data sets that enable you to use the power and features of PROC SGPLOT to create your map in any projection system and any coordinate window.
Frank Poppe, PW Consulting
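A hedged sketch of the map-drawing idea sketched in the abstract above: once the boundary coordinates are arranged as X, Y, and an ID per area, the POLYGON statement of PROC SGPLOT draws the map. The data set work.state_xy and its columns are assumptions, and the conversion from GMAP-style map data sets is not shown here.

   proc sgplot data=work.state_xy noautolegend;
      polygon x=proj_long y=proj_lat id=statecode / fill outline group=rate_bucket;
   run;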
There are few business environments more dynamic than that of a casino. Serving a multitude of entertainment options to thousands of patrons every day results in a lot of customer interaction points. All of these interactions occur in a highly competitive environment where, if a patron doesn't feel that he is getting the recognition that he deserves, he can easily walk across the street to a competitor. Add to this the expected amount of reinvestment per patron in the forms of free meals and free play. Making high-quality real-time decisions during each customer interaction is critical to the success of a casino. Such decisions need to be relevant to customers' needs and values, reflect the strategy of the business, and help maximize the organization's profitability. Being able to make those decisions repeatedly is what separates highly successful businesses from those that flounder or fail. Casinos have a great deal of information about a patron's history, behaviors, and preferences. Being able to react in real time to newly gathered information captured in ongoing dialogues opens up new opportunities about what offers should be extended and how patrons are treated. In this session, we provide an overview of real-time decisioning and its capabilities, review the various opportunities for real-time interaction in a casino environment, and explain how to incorporate the outputs of analytics processes into a real-time decision engine.
Natalie Osborn, SAS
It's well known that SAS® is the leader in advanced analytics but often overlooked is the intelligent data preparation that combines information from disparate sources to enable confident creation and deployment of compelling models. Improving data-based decision making is among the top reasons why organizations decide to embark on master data management (MDM) projects and why you should consider incorporating MDM functionality into your analytics-based processes. MDM is a discipline that includes the people, processes, and technologies for creating an authoritative view of core data elements in enterprise operational and analytic systems. This paper demonstrates why MDM functionality is a natural fit for many SAS solutions that need to have access to timely, clean, and unique master data. Because MDM shares many of the same technologies that power SAS analytic solutions, it has never been easier to add MDM capabilities to your advanced analytics projects.
Ron Agresta, SAS
Predictive analytics has been widely studied in recent years, and it has been applied to solve a wide range of real-world problems. Nevertheless, current state-of-the-art predictive analytics models are not well aligned with managers' requirements in that the models fail to include the real financial costs and benefits during the training and evaluation phases. Churn predictive modeling is one of those examples in which evaluating a model based on a traditional measure such as accuracy or predictive power does not yield the best results when measured by investment per subscriber in a loyalty campaign and the financial impact of failing to detect a real churner versus wrongly predicting a non-churner as a churner. In this paper, we propose a new financially based measure for evaluating the effectiveness of a voluntary churn campaign, taking into account the available portfolio of offers, their individual financial cost, and the probability of acceptance depending on the customer profile. Then, using a real-world churn data set, we compared different cost-insensitive and cost-sensitive predictive analytics models and measured their effectiveness based on their predictive power and cost optimization. The results show that using a cost-sensitive approach yields an increase in profitability of up to 32.5%.
Alejandro Correa Bahnsen, University of Luxembourg
Darwin Amezquita, DIRECTV
Juan Camilo Arias, Smartics
The goal of this session is to describe the whole process of model creation from the business request through model specification, data preparation, iterative model creation, model tuning, implementation, and model servicing. Each phase consists of several steps, and for each step we describe the main goal, the expected outcome, the tools used, our own SAS codes, useful nodes and settings in SAS® Enterprise Miner™, procedures in SAS® Enterprise Guide®, measurement criteria, and expected duration in man-days. For three steps, we also present deep insights with examples of practical usage, explanations of the code used, settings, and ways of exploring and interpreting the output. During the actual model creation process, we suggest using Microsoft Excel to keep all input metadata along with information about transformations performed in SAS Enterprise Miner. To get information about model results faster, we combine an automatic SAS® code generator implemented in Excel, input this code to SAS Enterprise Guide, and create a specific profile of results directly from the output tables of the SAS Enterprise Miner nodes. This paper also focuses on an example of a binary model stability check over time, performed in SAS Enterprise Guide by measuring the optimal cut-off percentage and lift. These measurements are visualized and automated using our own code. By using this methodology, users have direct contact with the transformed data along with the possibility to analyze and explore any intermediate results. Furthermore, the proposed approach can be used for several types of modeling (for example, binary and nominal predictive models or segmentation models). Generally, we have summarized our best practices for combining specific procedures performed in SAS Enterprise Guide, SAS Enterprise Miner, and Microsoft Excel to create and interpret models faster and more effectively.
Peter Kertys, VÚB a.s.
Effect modification occurs when the association between a predictor of interest and the outcome is differential across levels of a third variable--the modifier. Effect modification is statistically tested as the interaction effect between the predictor and the modifier. In repeated measures studies (with more than two time points), higher-order (three-way) interactions must be considered to test effect modification by adding time to the interaction terms. Custom fitting and constructing these repeated measures models are difficult and time consuming, especially with respect to estimating post-fitting contrasts. With the advancement of the LSMESTIMATE statement in SAS®, a simplified approach can be used to custom test for higher-order interactions with post-fitting contrasts within a mixed model framework. This paper provides a simulated example with tips and techniques for using an application of the nonpositional syntax of the LSMESTIMATE statement to test effect modification in repeated measures studies. This approach, which is applicable to exploring modifiers in randomized controlled trials (RCTs), goes beyond the treatment effect on outcome to a more functional understanding of the factors that can enhance, reduce, or change this relationship. Using this technique, we can easily identify differential changes for specific subgroups of individuals or patients that subsequently impact treatment decision making. We provide examples of conventional approaches to higher-order interaction and post-fitting tests using the ESTIMATE statement and compare and contrast this to the nonpositional syntax of the LSMESTIMATE statement. The merits and limitations of this approach are discussed.
Pronabesh DasMahapatra, PatientsLikeMe Inc.
Ryan Black, NOVA Southeastern University
SAS® Forecast Server provides easy and automatic large-scale forecasting, which enables organizations to commit fewer resources to the process, reduce human touch interaction and minimize the biases that contaminate forecasts. SAS Forecast Server Client represents the modernization of the graphical user interface for SAS Forecast Server. This session will describe and demonstrate this new client, including new features, such as demand classification, and overall functionality.
Udo Sglavo, SAS
Organisations find that SAS® upgrades and migration projects come with risk, costs, and challenges to solve. The benefits are enticing: new software capabilities such as SAS® Visual Analytics that help maintain your competitive advantage. An interesting conundrum. This paper explores how to evaluate the benefits and plan the project, as well as how the cloud option impacts modernisation. The author writes from the experience of leading numerous migration and modernisation projects at the leading UK SAS implementation partner.
David Shannon, Amadeus Software
This paper describes how we reduced elapsed time for the third maintenance release for SAS® 9.4 by as much as 22% by using the High Performance FICON for IBM System z (zHPF) facility to perform I/O for SAS® files on IBM mainframe systems. The paper details the performance improvements, internal testing to quantify improvements, and the customer actions needed to enable zHPF on their system. The benefits of zHPF are discussed within the larger context of other techniques that a customer can use to accelerate processing of SAS files.
Lewis King, SAS
Fred Forst
Multilevel models (MLMs) are frequently used in social and health sciences where data are typically hierarchical in nature. However, the commonly used hierarchical linear models (HLMs) are appropriate only when the outcome of interest is normally distributed. When you are dealing with outcomes that are not normally distributed (binary, categorical, ordinal), a transformation and an appropriate error distribution for the response variable needs to be incorporated into the model. Therefore, hierarchical generalized linear models (HGLMs) need to be used. This paper provides an introduction to specifying HGLMs using PROC GLIMMIX, following the structure of the primer for HLMs previously presented by Bell, Ene, Smiley, and Schoeneberger (2013). A brief introduction into the field of multilevel modeling and HGLMs with both dichotomous and polytomous outcomes is followed by a discussion of the model-building process and appropriate ways to assess the fit of these models. Next, the paper provides a discussion of PROC GLIMMIX statements and options as well as concrete examples of how PROC GLIMMIX can be used to estimate (a) two-level organizational models with a dichotomous outcome and (b) two-level organizational models with a polytomous outcome. These examples use data from High School and Beyond (HS&B), a nationally representative longitudinal study of American youth. For each example, narrative explanations accompany annotated examples of the GLIMMIX code and corresponding output.
Mihaela Ene, University of South Carolina
Bethany Bell, University of South Carolina
Genine Blue, University of South Carolina
Elizabeth Leighton, University of South Carolina
This presentation emphasizes use of SAS® 9.4 to perform multiple imputation of missing data using the PROC MI Fully Conditional Specification (FCS) method with subsequent analysis using PROC SURVEYLOGISTIC and PROC MIANALYZE. The data set used is based on a complex sample design. Therefore, the examples correctly incorporate the complex sample features and weights. The demonstration is then repeated in Stata, IVEware, and R for a comparison of major software applications that are capable of multiple imputation using FCS or equivalent methods and subsequent analysis of imputed data sets based on complex sample design data.
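A skeletal version of the SAS workflow described, with hypothetical variable names (the strata, cluster, and weight variables depend on the particular complex sample design):

proc mi data=survdata nimpute=5 seed=4321 out=mi_out;
   class smoker;
   fcs logistic(smoker) reg(bmi);           /* FCS methods per imputed variable */
   var age bmi smoker outcome;
run;

proc surveylogistic data=mi_out;
   by _imputation_;                         /* analyze each imputed data set */
   strata stratum;
   cluster psu;
   weight svywgt;
   model outcome(event='1') = age bmi smoker;
   ods output ParameterEstimates=parms;
run;

proc mianalyze parms=parms;                 /* combine results across imputations */
   modeleffects Intercept age bmi smoker;
run;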
Patricia Berglund, University of Michigan
Retailers proactively seek a data-driven approach to providing customized product recommendations that increase sales and customer loyalty. Product affinity models have been recognized as one of the vital tools for this purpose. The algorithm assigns a customer to a product affinity group when the likelihood of purchasing is highest and meets a minimum, absolute requirement. In practice, however, valuable customers (up to 30% of the total universe) who buy across multiple product categories with two or more balanced product affinity likelihoods remain unassigned and cannot receive effective product recommendations. This paper presents multiple product affinity models, developed using the SAS® macro language, to address the problem. We demonstrate how the assignment algorithm successfully assigns these undefined customers to appropriate multiple product affinity groups using nationwide retailer transactional data. In addition, the results show that customers establish loyalty by migrating from a single product affinity group to multiple groups. The paper also describes the clustering algorithm and nonparametric tree model used for model building; the SAS macro code for customer assignment is provided in an appendix.
Hsin-Yi Wang, Alliance Data Systems
Differential item functioning (DIF), as an assessment tool, has been widely used in quantitative psychology, educational measurement, business management, insurance, and health care. The purpose of DIF analysis is to detect response differences of items in questionnaires, rating scales, or tests across different subgroups (for example, gender) and to ensure the fairness and validity of each item for those subgroups. The goal of this paper is to demonstrate several ways to conduct DIF analysis by using different SAS® procedures (PROC FREQ, PROC LOGISTIC, PROC GENMOD, PROC GLIMMIX, and PROC NLMIXED) and their applications. There are three general methods to examine DIF: generalized Mantel-Haenszel (MH), logistic regression, and item response theory (IRT). The SAS® System provides flexible procedures for all these approaches. There are two types of DIF: uniform DIF, which remains consistent across ability levels, and non-uniform DIF, which varies across ability levels. Generalized MH is a nonparametric method and is often used to detect uniform DIF, while the other two are parametric methods and examine both uniform and non-uniform DIF. In this study, I first describe the underlying theories and mathematical formulations for each method. Then I show the SAS statements, input data format, and SAS output for each method, followed by a detailed demonstration of the differences among the three methods. Specifically, PROC FREQ is used to calculate generalized MH only for dichotomous items. PROC LOGISTIC and PROC GENMOD are used to detect DIF by using logistic regression. PROC NLMIXED and PROC GLIMMIX are used to examine DIF by applying an exploratory item response theory model. Finally, I use SAS/IML® to call two R packages (that is, difR and lordif) to conduct DIF analysis and then compare the results between SAS procedures and R packages. An example data set, the Verbal Aggression assessment, which includes 316 subjects and 24 items, is used in this study. In the DIF analyses, the male group is used as the reference group, and the female group is used as the focal group. All the analyses are conducted with SAS® 9.3 and R 2.15.3. The paper closes with the conclusion that the SAS System provides flexible and efficient ways to conduct DIF analysis. However, it is essential for SAS users to understand the underlying theories and assumptions of different DIF methods and apply them appropriately in their DIF analyses.
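To give a flavor of two of the approaches compared (the data set and variable names below are hypothetical placeholders, not the Verbal Aggression items), the Mantel-Haenszel and logistic regression checks for a single dichotomous item might be coded as:

/* Generalized MH: stratify on total score, compare gender by item response */
proc freq data=dif_data;
   tables total_score*gender*item1 / cmh;
run;

/* Logistic regression: GENDER tests uniform DIF, TOTAL_SCORE*GENDER tests non-uniform DIF */
proc logistic data=dif_data descending;
   class gender / param=ref;
   model item1 = total_score gender total_score*gender;
run;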
Yan Zhang, Educational Testing Service
You might be familiar with or experienced in writing or running reports using PROC REPORT, PROC TABULATE, or other methods of report generation. These reporting methods are often very flexible, but they can be limited in the statistics that are available as options for inclusion in the resulting output. SAS® provides the capability to produce a variety of statistics through Base SAS® and SAS/STAT® procedures by using ODS OUTPUT. These procedures include statistics from PROC CORR, PROC FREQ, and PROC UNIVARIATE in Base SAS, as well as PROC GLM, PROC LIFETEST, PROC MIXED, PROC LOGISTIC, and PROC TTEST in SAS/STAT. A number of other procedures can also produce useful ODS OUTPUT objects. Commonly requested statistics for reports include p-values, confidence intervals, and test statistics. These values can be computed with the appropriate procedure, and ODS OUTPUT can then be used to write the desired information to a data set so that it can be combined with the other data used to produce the report. Examples that demonstrate how to generate the desired statistics and include them in the requested final reports are provided and discussed.
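For example (a simple sketch using SASHELP.CLASS rather than the paper's data), a t-test p-value can be captured with ODS OUTPUT and later merged into the reporting data:

proc ttest data=sashelp.class;
   class sex;
   var height;
   ods output TTests=work.ttests;   /* captures t values, DF, and p-values in a data set */
run;

The WORK.TTESTS data set can then be joined with the other report data before running PROC REPORT or PROC TABULATE.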
Debbie Buck, inVentiv Health Clinical
There are times when the objective is to provide a summary table and graph for several quality improvement measures on a single page to allow leadership to monitor the performance of measures over time. The challenges were to decide which SAS® procedures to use, how to integrate multiple SAS procedures to generate a set of plots and summary tables within one page, and how to determine whether to use box plots or series plots of means or medians. We considered the SGPLOT and SGPANEL procedures, and Graph Template Language (GTL). As a result, given the nature of the request, the decision led us to use GTL and the SGRENDER procedure in the %BXPLOT2 macro. For each measure, we used the BOXPLOTPARM statement to display a series of box plots and the BLOCKPLOT statement for a summary table. Then we used the LAYOUT OVERLAY statement to combine the box plots and summary tables on one page. The results display a summary table (BLOCKPLOT) above each box plot series for each measure on a single page. Within each box plot series, there is an overlay of a system-level benchmark value and a series line connecting the median values of each box plot. The BLOCKPLOT contains descriptive statistics per time period illustrated in the associated box plot. The discussion points focus on techniques for nesting the lattice overlay with box plots and BLOCKPLOTs in GTL and some reasons for choosing box plots versus series plots of medians or means.
Greg Stanek, Fannie Mae
This paper describes the new features added to the macro facility in SAS® 9.3 and SAS® 9.4. New features described include the /READONLY option for macro variables, the %SYSMACEXIST macro function, the %PUT &= feature, and new automatic macro variables such as &SYSTIMEZONEOFFSET.
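A few of these features in action (a minimal sketch; the macro and macro variable names are arbitrary):

%global / readonly release_tag = v2015_Q1;    /* read-only macro variable: cannot be changed later */
%put &=release_tag;                           /* writes the name and value to the log */
%put &=systimezoneoffset;                     /* automatic time zone macro variable (SAS 9.4) */

%macro cleanup;
%mend cleanup;
%put NOTE: cleanup compiled? %sysmacexist(cleanup);   /* returns 1 if the macro exists in WORK.SASMACR */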
Rick Langston, SAS
This paper takes you through the steps to modernize your analytical business processes using SAS® Decision Manager, a centrally managed, easy-to-use interface designed for business users. See how you can manage your data, business rules, and models, and then combine those components to test and deploy as flexible decision options within your business processes. Business rules, which usually exist today in SAS® code, Java code, SQL scripts, or other types of scripts, can be managed as corporate assets separate from the business process. This will add flexibility and speed for making decisions as policies, customer base, market conditions, or other business requirements change. Your business can adapt quickly and still be compliant with regulatory requirements and support overall process governance and risk. This paper shows how to use SAS Decision Manager to build business rules using a variety of methods including analytical methods and straightforward explicit methods. In addition, we demonstrate how to manage or monitor your operational analytical models by using automation to refresh your models as data changes over time. Then we show how to combine your data, business rules, and analytical models together in a decision flow, test it, and learn how to deploy in batch or real time to embed decision results directly into your business applications or processes at the point of decision.
Steve Sparano, SAS
Well, Hadoop community, now that you have your data in Hadoop, how are you staging your analytical base tables? In my discussions with clients about this, we all agree on one thing: Data sizes stored in Hadoop prevent us from moving that data to a different platform in order to generate the analytical base tables. To address this dilemma, I want to introduce to you the SAS® In-Database Code Accelerator for Hadoop.
Steven Sober, SAS
Donna DeCapite, SAS
The DS2 programming language was introduced as part of the SAS® 9.4 release. Although this new language introduced many significant advancements, one of the most overlooked features is the addition of object-oriented programming constructs. Specifically, the addition of user-defined packages and methods enables programmers to create their own objects, greatly increasing the opportunity for code reuse and decreasing both development and QA duration. In addition, using this object-oriented approach provides a powerful design methodology where objects closely resemble the real-world entities that they model, leading to programs that are easier to understand and maintain. This paper introduces the object-oriented programming paradigm in a three-step manner. First, the key object-oriented features found in the DS2 language are introduced, and the value each provides is discussed. Next, these object-oriented concepts are demonstrated through the creation of a blackjack simulation where the players, the dealer, and the deck are modeled and coded as objects. Finally, a credit risk scoring object is presented to demonstrate the application of this approach in a real-world setting.
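A tiny sketch of the idea (hypothetical names, not the paper's blackjack or scoring objects): a user-defined DS2 package with its own state and methods, instantiated and used from a DS2 data program.

proc ds2;
   package counter / overwrite=yes;
      dcl double total;
      method counter();            /* constructor */
         total = 0;
      end;
      method add(double x);
         total = total + x;
      end;
      method getTotal() returns double;
         return total;
      end;
   endpackage;

   data _null_;
      method init();
         dcl package counter c();  /* create an instance of the package */
         dcl double t;
         c.add(5);
         c.add(7);
         t = c.getTotal();
         put 'Running total: ' t;
      end;
   enddata;
   run;
quit;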
Shaun Kaufmann, Farm Credit Canada
SAS® Visual Analytics provides users with a unique view of their company by monitoring products and identifying opportunities and threats, making it possible to make recommendations, set a price strategy, and accelerate or slow product growth. In SAS Visual Analytics, you can see in one report the return required, a competitor analysis, and a comparison of realized results versus predicted results. Reports can be used to obtain a vision of the whole company and include several hierarchies (for example, by business unit, by segment, by product, by region, and so on). SAS Visual Analytics enables senior executives to view information easily and quickly. You can also use tracking indicators that are used by the insurance market.
Jacqueline Fraga, SulAmerica Cia Nacional de Seguros
EBI administrators who are new to SAS® Visual Analytics and used to the logging capability of the SAS® OLAP Server might be wondering how they can get their SAS® LASR™ Analytic Server to produce verbose log files. While the SAS LASR Analytic Server logs differ from those produced by the SAS OLAP Server, the SAS LASR Analytic Server log contains information about each request made to LASR tables and can be a great data source for administrators looking to learn more about how their SAS Visual Analytics deployments are being used. This session will discuss how to quickly enable logging for your SAS LASR Analytic Server in SAS Visual Analytics 6.4. You will see what information is available to a SAS administrator in these logs, how they can be parsed into data sets with SAS code, then loaded back into the SAS LASR Analytic Server to create SAS Visual Analytics explorations and reports.
Chris Vincent, Western Kentucky University
Use SAS® to communicate with your colleagues and customers anywhere in the world, even if you do not speak the same language! In today's global economy, most of us can no longer assume that everyone in our company has an office in the same building, works in the same country, or speaks the same language. While it is vital to quickly analyze and report on large amounts of data, we must present our reports in a way that our readers can understand. New features in SAS® Visual Analytics 7.1 give you the power to generate reports quickly and translate them easily so that your readers can comprehend the results. This paper describes how SAS® Visual Analytics Designer 7.1 delivers the Power to Know® in the language preferred by the report reader!
Will Ballard, SAS
The SAS® Environment Manager Service Architecture expands on the core monitoring capabilities of SAS® Environment Manager delivered in SAS® 9.4. Multiple sources of data available in the SAS® Environment Manager Data Mart--traditional operational performance metrics, events, and ARM, audit, and access logs--together with built-in and custom reports put powerful capabilities into the hands of IT operations. This paper introduces the concept of service-oriented event identification and discusses how to use the new architecture and tools effectively as well as the wealth of data available in the SAS Environment Manager Data Mart. In addition, extensions for importing new data, writing custom reports, instrumenting batch SAS® jobs, and leveraging and extending auditing capabilities are explored.
Bob Bonham, SAS
Bryan Ellington, SAS
Walt Disney World Resort is home to four theme parks, two water parks, five golf courses, 26 owned-and-operated resorts, and hundreds of merchandise and dining experiences. Every year millions of guests stay at Disney resorts to enjoy the Disney Experience. Assigning physical rooms to resort and hotel reservations is a key component to maximizing operational efficiency and guest satisfaction. Solutions can range from automation to optimization programs. The volume of reservations and the variety and uniqueness of guest preferences across the Walt Disney World Resort campus pose an opportunity to solve a number of reasonably difficult room assignment problems by leveraging operations research techniques. For example, a guest might prefer a room with specific bedding and adjacent to certain facilities or amenities. When large groups, families, and friends travel together, they often want to stay near each other using specific room configurations. Rooms might be assigned to reservations in advance and upon request at check-in. Using mathematical programming techniques, the Disney Decision Science team has partnered with the SAS® Advanced Analytics R&D team to create a room assignment optimization model prototype and implement it in SAS/OR®. We describe how this collaborative effort has progressed over the course of several months, discuss some of the approaches that have proven to be productive for modeling and solving this problem, and review selected results.
Haining Yu, Walt Disney Parks & Resorts
Hai Chu, Walt Disney Parks & Resorts
Tianke Feng, Walt Disney Parks & Resorts
Matthew Galati, SAS
Ed Hughes, SAS
Ludwig Kuznia, Walt Disney Parks & Resorts
Rob Pratt, SAS
SAS® data sets have PROC DATASETS, and SAS catalogs have PROC CATALOG. Find out what the little-known PROC CATALOG can do for you!
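For example (assuming a format catalog WORK.FORMATS already exists), PROC CATALOG can list, copy, or delete individual catalog entries:

proc catalog catalog=work.formats;
   contents out=work.fmt_entries;   /* write the list of catalog entries to a data set */
run;
quit;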
Louise Hadden, Abt Associates Inc.
The task was to produce a figure legend that gave the quintile ranges of a continuous measure corresponding to each color on a five-color choropleth US map. Actually, we needed to produce the figures and associated legends for several dozen maps for several dozen different continuous measures and time periods, as well as create the associated alt text for compliance with Section 508. So, the process needed to be automated. A method was devised using PROC RANK to generate the quintiles, PROC SQL to get the data value ranges within each quintile, and PROC FORMAT (with the CNTLIN= option) to generate and store the legend labels. The resulting data files and format catalogs were used to generate both the maps (with legends) and associated alt text. Then, these processes were rolled into a macro to apply the method for the many different maps and their legends. Each part of the method is quite simple--even mundane--but together, these techniques enabled us to standardize and automate an otherwise very tedious process. The same basic strategy could be used whenever you need to dynamically generate data buckets and keep track of the bucket boundaries (for producing labels, map legends, or alt text or for benchmarking future data against the stored categories).
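A stripped-down sketch of the chain described (STATE_RATES, RATE, and the format name are hypothetical placeholders):

proc rank data=state_rates groups=5 out=ranked;
   var rate;
   ranks quintile;                       /* quintile values 0-4 */
run;

proc sql;
   create table legend as
   select quintile + 1 as start,
          min(rate)    as q_min,
          max(rate)    as q_max
   from ranked
   group by quintile;
quit;

data cntlin;
   set legend;
   fmtname = 'QUINTLBL';
   label   = catx(' to ', put(q_min, 8.1), put(q_max, 8.1));
   keep fmtname start label;
run;

proc format cntlin=cntlin;               /* legend labels now available as the QUINTLBL. format */
run;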
Christianna Williams, Self-Employed
Louise Hadden, Abt Associates Inc.
One of the fascinating features of SAS® is that the software often provides multiple ways to accomplish the same task. A perfect example of this is the aggregation and summarization of data across multiple rows or BY groups of interest. These groupings can be study participants, time periods, geographical areas, or just about any type of discrete classification that you want. While many SAS programmers might be accustomed to accomplishing these aggregation tasks with PROC SUMMARY (or equivalently, PROC MEANS), PROC SQL can also do a bang-up job of aggregation--often with less code and fewer steps. This step-by-step paper explains how to use PROC SQL for a variety of summarization and aggregation tasks. It uses a series of concrete, task-oriented examples to do so. The presentation style is similar to that used in the author's previous paper, 'PROC SQL for DATA Step Die-Hards.'
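For instance (using a hypothetical CLAIMS data set), the same BY-group aggregation can be expressed either way:

proc sql;
   create table state_summary as
   select state,
          count(*)      as n_claims,
          sum(charges)  as total_charges,
          mean(charges) as avg_charges
   from claims
   group by state;
quit;

/* the equivalent PROC SUMMARY step */
proc summary data=claims nway;
   class state;
   var charges;
   output out=state_summary2 (drop=_type_ rename=(_freq_=n_claims))
          sum=total_charges mean=avg_charges;
run;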
Christianna Williams, Self-Employed
It is a common task to reshape your data from long to wide for the purpose of reporting or analytical modeling and PROC TRANSPOSE provides a convenient way to accomplish this. However, when performing the transpose action on large tables stored in a database management system (DBMS) such as Teradata, the performance of PROC TRANSPOSE can be significantly compromised. In this case, it is more efficient for the DBMS to perform the transpose task. SAS® provides in-database processing technology in PROC SQL, which allows the SQL explicit pass-through method to push some or all of the work to the DBMS. This technique has facilitated integration between SAS and a wide range of data warehouses and databases, including Teradata, EMC Greenplum, IBM DB2, IBM Netezza, Oracle, and Aster Data. This paper uses the Teradata database as an example DBMS and explains how to transpose a large table that resides in it using the SQL explicit pass-through method. The paper begins with comparing the execution time using PROC TRANSPOSE with the execution time using SQL explicit pass-through. From this comparison, it is clear that SQL explicit pass-through is more efficient than the traditional PROC TRANSPOSE when transposing Teradata tables, especially large tables. The paper explains how to use the SQL explicit pass-through method and discusses the types of data columns that you might need to transpose, such as numeric and character. The paper presents a transpose solution for these types of columns. Finally, the paper provides recommendations on packaging the SQL explicit pass-through method by embedding it in a macro. SAS programmers who are working with data stored in an external DBMS and who would like to efficiently transpose their data will benefit from this paper.
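The core of the technique looks roughly like the following (the connection options, database, table, and column names are placeholders, not the paper's code):

proc sql;
   connect to teradata (server=tdprod user=&td_user password=&td_pwd);
   create table work.sales_wide as
   select * from connection to teradata (   /* this inner query runs entirely in Teradata */
      select cust_id,
             max(case when month_id = 1 then sales end) as sales_m1,
             max(case when month_id = 2 then sales end) as sales_m2,
             max(case when month_id = 3 then sales end) as sales_m3
      from mydb.monthly_sales
      group by cust_id
   );
   disconnect from teradata;
quit;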
Tao Cheng, Accenture
If your data do not meet the assumptions for a standard parametric test, you might want to consider using a permutation test. By randomly shuffling the data and recalculating a test statistic, a permutation test can calculate the probability of getting a value equal to or more extreme than an observed test statistic. With the power of matrices, vectors, functions, and user-defined modules, the SAS/IML® language is an excellent option. This paper covers two examples of permutation tests: one for paired data and another for repeated measures analysis of variance. For those new to SAS/IML® software, this paper offers a basic introduction and examples of how effective it can be.
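A bare-bones version of the paired-data case (a data set PAIRED with a column DIFF of paired differences is assumed):

proc iml;
   use paired;  read all var {diff} into d;  close paired;
   obsStat = mean(d);                         /* observed mean difference */
   call randseed(20150101);
   B = 10000;
   n = nrow(d);
   permStat = j(B, 1, .);
   signs = j(n, 1, .);
   do i = 1 to B;
      call randgen(signs, 'Bernoulli', 0.5);
      permStat[i] = mean((2*signs - 1) # d);  /* randomly flip the sign of each pair */
   end;
   pValue = mean(abs(permStat) >= abs(obsStat));
   print obsStat pValue;
quit;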
John Vickery, North Carolina State University
Do you have reports based on SAS/GRAPH® procedures, customized with multiple GOPTIONS? Do you dream of those same graphs existing in a GOPTIONS and ANNOTATE-free world? Re-creating complex graphs using statistical graphics (SG) procedures is not only possible, but much easier than you think! Using before and after examples, I discuss how the graphs were created using the combination of Graph Template Language (GTL) and the SG procedures. This method produces graphs that are nearly indistinguishable from the original. This method simplifies the code required to make complex graphs, allows for maximum re-usability for graphics code, and enables changes to cascade to multiple reports simultaneously.
Julie VanBuskirk, Nurtur
Evaluation of the impact of critical or high-risk events or periods in longitudinal studies of growth might provide clues to the long-term effects of life events and the efficacy of preventive and therapeutic interventions. Conventional linear longitudinal models typically involve a single growth profile to represent linear changes in an outcome variable across time, which sometimes does not fit the empirical data. Piecewise linear mixed-effects models allow different linear functions of time corresponding to the pre- and post-critical time point trends. This presentation shows: 1) how to fit piecewise linear mixed-effects models in SAS step by step, in the context of a clinical trial with two-arm interventions and a predictive covariate of interest; 2) how to obtain the slopes and corresponding p-values for intervention and control groups during pre- and post-critical periods, conditional on different values of the predictive covariate; and 3) how to make meaningful comparisons and present results in a scientific manuscript. A SAS macro to generate summary tables that assist the interpretation of the results is also provided.
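In outline (variable names and the critical time point of 6 are placeholders, not the paper's macro), the piecewise coding and model might look like:

data long2;
   set long;
   time_pre  = min(time, 6);        /* slope up to the critical time point */
   time_post = max(time - 6, 0);    /* additional change after the critical point */
run;

proc mixed data=long2;
   class id group;
   model y = group time_pre time_post
             group*time_pre group*time_post covar / solution;
   random intercept time_pre time_post / subject=id type=un;
run;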
Qinlei Huang, St Jude Children's Research Hospital
Okay, you've read all the books, manuals, and papers and can produce graphics with SAS/GRAPH® and Output Delivery System (ODS) Graphics with the best of them. But how do you handle the Final Mile problem--getting your images generated in SAS® sized just right and positioned just so in Microsoft Excel? This paper presents a method of doing so that employs SAS Integration Technologies and Excel Visual Basic for Applications (VBA) to produce SAS graphics and automatically embed them in Excel worksheets. This technique might be of interest to all skill levels. It uses Base SAS®, SAS/GRAPH, ODS Graphics, the SAS macro facility, SAS® Integration Technologies, Microsoft Excel, and VBA.
Ted Conway, Self
Many companies use geographically dispersed data centers running SAS® Grid Manager to provide 24/7 SAS® processing capability with the thought that if a disaster takes out one of the data centers, another data center can take over the SAS processing. To accomplish this, careful planning must take into consideration hardware, software, and communication infrastructure along with the SAS workload. This paper looks into some of the options available, focusing on using SAS Grid Manager to manage the disaster workload shift.
Glenn Horton, SAS
Cheryl Doninger, SAS
Doug Haigh, SAS
SAS® Simulation Studio, a component of SAS/OR® software for Microsoft Windows environments, provides powerful and versatile capabilities for building, executing, and analyzing discrete-event simulation models in a graphical environment. Its object-oriented, drag-and-drop modeling makes building and working with simulation models accessible to novice users, and its broad range of model configuration options and advanced capabilities makes SAS Simulation Studio suitable also for sophisticated, detailed simulation modeling and analysis. Although the number of modeling blocks in SAS Simulation Studio is small enough to be manageable, the number of ways in which they can be combined and connected is almost limitless. This paper explores some of the modeling methods and constructs that have proven most useful in practical modeling with SAS Simulation Studio. SAS has worked with customers who have applied SAS Simulation Studio to measure, predict, and improve system performance in many different industries, including banking, public utilities, pharmaceuticals, manufacturing, prisons, hospitals, and insurance. This paper looks at some discrete-event simulation modeling needs that arise in specific settings and some that have broader applicability, and it considers the ways in which SAS Simulation Studio modeling can meet those needs.
Ed Hughes, SAS
Emily Lada, SAS
Researchers, patients, clinicians, and other health-care industry participants are forging new models for data-sharing in hopes that the quantity, diversity, and analytic potential of health-related data for research and practice will yield new opportunities for innovation in basic and translational science. Whether we are talking about medical records (for example, EHR, lab, notes), administrative data (claims and billing), social (on-line activity), behavioral (fitness trackers, purchasing patterns), contextual (geographic, environmental), or demographic data (genomics, proteomics), it is clear that as health-care data proliferates, threats to security grow. Beginning with a review of the major health-care data breaches in our recent history, we highlight some of the lessons that can be gleaned from these incidents. In this paper, we talk about the practical implications of data sharing and how to ensure that only the right people have the right access to the right level of data. To that end, we explore not only the definitions of concepts like data privacy, but we discuss, in detail, methods that can be used to protect data--whether inside our organization or beyond its walls. In this discussion, we cover the fundamental differences between encrypted data, 'de-identified', 'anonymous', and 'coded' data, and methods to implement each. We summarize the landscape of maturity models that can be used to benchmark your organization's data privacy and protection of sensitive data.
Greg Nelson, ThotWave
Predictions, including regressions and classifications, are the predominant focus of many statistical and machine-learning models. However, in the era of big data, a predictive modeling process contains more than just making the final predictions. For example, a large collection of data often represents a set of small, heterogeneous populations. Identification of these subgroups is therefore an important step in predictive modeling. In addition, big data sets are often complex, exhibiting high dimensionality. Consequently, variable selection, transformation, and outlier detection are integral steps. This paper provides working examples of these critical stages using SAS® Visual Statistics, including data segmentation (supervised and unsupervised), variable transformation, outlier detection, and filtering, in addition to building the final predictive model using methodology such as linear regressions, decision trees, and logistic regressions. The illustration data were collected from vehicle emission testing results from 2010 to 2014.
Xiangxiang Meng, SAS
Jennifer Ames, SAS
Wayne Thompson, SAS
Sometimes you need to provide multiple administrators with the ability to manage your software. The rationale can be a need to separate roles and responsibilities (such as installer and configuration manager), changing job responsibilities, or even just covering for the primary administrator while on vacation. To meet that need, it's tempting to share the logon credentials of your SAS® installer account, but doing so can potentially compromise your security and cause a corporate audit to fail. This paper focuses on standard IT practices and utilities, explaining how to diligently manage the administration of your SAS software to help you properly ensure that access is secured and that auditability is maintained.
Rob Collum, SAS
Clifford Meyers, SAS
Dynamic pricing is a real-time strategy where corporations attempt to alter prices based on varying market demand. The hospitality industry has been doing this for quite a while, altering prices significantly during the summer months or weekends when demand for rooms is at a premium. In recent years, the sports industry has started to catch on to this trend, especially within Major League Baseball (MLB). The purpose of this paper is to explore the methodology of applying this type of pricing to the hockey ticketing arena.
Christopher Jones, Deloitte Consulting
Sabah Sadiq, Deloitte Consulting
Jing Zhao, Deloitte Consulting LLP
One of the hallmarks of a good or great SAS® program is that it requires only a minimum of upkeep. Especially for code that produces reports on a regular basis, it is preferable to minimize user and programmer input and instead have the input data drive the outputs of a program. Data-driven SAS programs are more efficient and reliable, require less hardcoding, and result in less downtime and fewer user complaints. This paper reviews three ways of building a SAS program to create regular Microsoft Excel reports; one method using hardcoded variables, another using SAS keyword macros, and the last using metadata to drive the reports.
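As one flavor of the metadata-driven approach (the control table, macro, report code, and file paths below are made up for illustration, and ODS EXCEL assumes SAS 9.4), a control data set can drive one Excel report per row via CALL EXECUTE:

%macro region_report(region=, outfile=);
   ods excel file="&outfile";
   proc report data=sales;
      where region = "&region";
      columns product sales;
   run;
   ods excel close;
%mend region_report;

data _null_;
   set report_control;                 /* one row per report: REGION and OUTFILE */
   call execute(cats('%nrstr(%region_report(region=', region,
                     ', outfile=', outfile, '))'));
run;

Adding a report means adding a row to REPORT_CONTROL, not changing the program.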
Andrew Clapson, MD Financial Management
A bubble map can be a useful tool for identifying trends and visualizing the geographic proximity and intensity of events. This session shows how to use PROC GEOCODE and PROC GMAP to turn a data set of addresses and events into a map of the United States with scaled bubbles depicting the location and intensity of the events.
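A simplified, state-level variant of the idea (the EVENTS data set, its numeric ZIP and EVENT_COUNT variables, and the aggregation to state level are assumptions for illustration; the session's own approach may differ in detail):

proc geocode method=zip data=events out=events_geo lookup=sashelp.zipcode;
run;                                              /* attaches X/Y coordinates for each ZIP */

data events_geo;
   set events_geo;
   state = stfips(zipstate(put(zip, z5.)));       /* numeric FIPS code to match MAPS.US */
run;

proc summary data=events_geo nway;
   class state;
   var event_count;
   output out=state_events (drop=_:) sum=total_events;
run;

proc gmap data=state_events map=maps.us all;
   id state;
   bubble total_events / levels=5;                /* bubble size scaled by event intensity */
run;
quit;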
Caroline Cutting, Warren Rogers Associates
Although it does not happen every day, it is not unusual to need to place a quoted string within another quoted string. Fortunately, SAS® recognizes both single and double quote marks and either can be used within the other, which gives you the ability to have two-deep quoting. There are situations, however, where two kinds of quotes are not enough. Sometimes you need a third layer or, more commonly, you need to use a macro variable within the layers of quotes. Macro variables can be especially problematic, because they generally do not resolve when they are inside single quotes. However, this is SAS and that implies that there are several things going on at once and that there are several ways to solve these types of quoting problems. The primary goal of this presentation is to assist the programmer with solutions to the quotes-within-quotes problem with special emphasis on the presence of macro variables. The various techniques are contrasted as are the likely situations that call for these types of solutions. A secondary goal of this presentation is to help you understand how SAS works with quote marks and how it handles quoted strings. Without going into the gory details, a high-level understanding can be useful in a number of situations.
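To give a taste of the problem (a small, self-contained example rather than anything from the paper):

%let name = O%str(%')Malley;      /* %STR with %' masks the unmatched quote in the value */

data _null_;
   msg1 = 'Hello &name';          /* single quotes: the macro variable is NOT resolved */
   msg2 = "Hello &name";          /* double quotes: resolves to Hello O'Malley */
   put msg1= / msg2=;
run;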
Art Carpenter, California Occidental Consultants
REST is being used across the industry for designing networked applications to provide lightweight and powerful alternatives to web services such as SOAP and Web Services Description Language (WSDL). Since REST is based entirely around HTTP, SAS® provides everything you need to make REST calls and process structured and unstructured data alike. Learn how PROC HTTP and other SAS language features provide everything you need to simply and securely make use of REST.
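A minimal GET request as a sketch (the URL is simply a public test endpoint used for illustration):

filename resp temp;

proc http
   url="https://httpbin.org/get"
   method="GET"
   out=resp;
run;

data _null_;          /* dump the JSON response to the log */
   infile resp;
   input;
   put _infile_;
run;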
Joseph Henry, SAS
Retrospective case-control studies are frequently used to evaluate health care programs when it is not feasible to randomly assign members to a respective cohort. Without randomization, observational studies are more susceptible to selection bias where the characteristics of the enrolled population differ from those of the entire population. When the participant sample is different from the comparison group, the measured outcomes are likely to be biased. Given this issue, this paper discusses how propensity score matching and random effects techniques can be used to reduce the impact selection bias has on observational study outcomes. All results shown are drawn from an ROI analysis using a participant (cases) versus non-participant (controls) observational study design for a fitness reimbursement program aiming to reduce health care expenditures of participating members.
Jess Navratil-Strawn, Optum
Streaming data is becoming more and more prevalent. Everything is generating data now--social media, machine sensors, the 'Internet of Things'. And you need to decide what to do with that data right now. And 'right now' could mean 10,000 times or more per second. SAS® Event Stream Processing provides an infrastructure for capturing streaming data and processing it on the fly--including applying analytics and deciding what to do with that data. All in milliseconds. There are some basic tenets on how SAS® provides this extremely high-throughput, low-latency technology to meet whatever streaming analytics your company might want to pursue.
Diane Hatcher, SAS
Jerry Baulier, SAS
Steve Sparano, SAS
SAS® Visual Analytics provides numerous capabilities to analyze data lightning fast and make key business decisions that are critical for day-to-day operations. Depending on your organization, be it Human Resources, Sales, or Finance, the data can be easily mined by decision makers, providing information that empowers the user to make key business decisions. The right data preparation during report development is the key to success. SAS Visual Analytics provides the ability to explore the data and to make forecasts using automatic charting capabilities with a simple click-and-choose interface. The ability to load all the historical data into memory enables you to make decisions by analyzing the data patterns. The decision is within reach when the report designer uses SAS® Visual Analytics Designer functionality like alerts, display rules, ranks, comments, and others. Planning your data preparation task is critical for the success of the report. Identify the category and measure values in the source data, and convert them appropriately, based on your planned usage. SAS Visual Analytics has capabilities that help perform conversion on the fly. Creating meaningful derived variables on the go and hierarchies on the run reduces development time. Alert notifications are sent to the right decision makers by e-mail when the report objects contain data that meets certain criteria. The system monitors the data, and the report developer can specify how frequently the system checks are made and the frequency at which the notifications are sent. Display rules help in highlighting the right metrics to leadership, which helps focus the decision makers on the right metric in the data maze. For example, color coding the metrics quickly tells the report user which business problems require action. Ranking the metrics, such as top 10 or bottom 10, can help the decision makers focus on a success or on problem areas. They can drill into more details about why they stand out or fall behind. A report metric can be discussed directly in the report by using the comments feature, and responding to other comments can lead to the right next steps for the organization. Also, data quality is always monitored when you have actionable reports, which helps to create a responsive and reliable reporting environment.
Arun Sugumar, Kavi Associates
Vimal Raj Arockiasamy, Kavi Associates
The IN operator in the DATA step is used to search a specific variable for one or more values, either numeric or character (for example, 'if X in (2) then ...'). This brief note explains how the opposite situation can be managed: how to search several variables for a specific value by applying an array and the IN operator together.
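In code, the idea reduces to something like this (SURVEY and Q1-Q10 are hypothetical names):

data flagged;
   set survey;
   array q{*} q1-q10;
   if 2 in q then found2 = 1;   /* does any of Q1-Q10 equal 2? */
   else found2 = 0;
run;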
Can Tongur, Statistics Sweden
Given the challenges of data security in today's business environment, how can you protect the data that is used by SAS® Visual Analytics? SAS® has implemented security features in its widely used business intelligence platform, including row-level security in SAS Visual Analytics. Row-level security specifies who can access particular rows in a LASR table. Throughout this paper, we discuss two ways of implementing row-level security for LASR tables in SAS® Visual Analytics--interactively and in batch. Both approaches link table-based permission conditions with identities that are stored in metadata.
Zuzu Williams, SAS
It is a safe assumption that almost every SAS® user learns how to use the SET statement not long after they're taught the concept of a DATA step. Further, it would probably be reasonable to guess that almost every one of those people covered the MERGE statement soon afterwards. Many, maybe most, also got to try the UPDATE and/or MODIFY statements eventually, as well. It would also be a safe assumption that very few people have taken the time to review the manual since they learned about those statements. That is most unfortunate, because there are so many options available to users that can assist them in their tasks of obtaining and combining data sets. This presentation is designed to build on the basic understanding of SET, MERGE, and UPDATE. It assumes that the attendee or reader has a basic knowledge of those statements, and it introduces various options and usages that extend the utility of these basic commands.
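Two of the lesser-used SET options as a quick illustration (the data set names are placeholders):

data combined;
   length source $ 41;
   set sales2013 sales2014 indsname=dsn nobs=total_obs;
   source   = dsn;               /* INDSNAME=: which input data set the current row came from */
   pct_done = _n_ / total_obs;   /* NOBS=: total observation count, available before execution */
run;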
Andrew Kuligowski, HSN
Join us for lunch as we discuss the benefits of being part of the elite group that is SAS Certified Professionals. The SAS Global Certification program has awarded more than 79,000 credentials to SAS users across the globe. Come listen to Terry Barham, Global Certification Manager, give an overview of the SAS Certification program, explain the benefits of becoming SAS certified and discuss exam preparation tips. This session will also include a Q&A section where you can get answers to your SAS Certification questions.
SAS® 9.4M2 brings even more progress to the interoperability between SAS and Hadoop, the industry standard for big data. This talk brings you up to date on where we are: more distributions, more data types, and more options. You now have the choice to run a stand-alone cluster or to co-mingle your SAS processing on your general-purpose clusters, managing it all with YARN. Come and learn about our new developments and get a glimpse of what we are looking at in the future.
Paul Kent, SAS
The real promise of the 'Cloud' is to make things easy. Easy for IT to manage. Easy for users to use. Easy for self-service deployments and access to software. SAS's approach to the Cloud is all about 'easy'. See what you can do today in the Cloud, and where we are going. What 'easy' do you want with SAS?
Diane Hatcher, SAS
SAS® formats can be used in so many different ways! Even the most basic SAS format use (modifying the way a SAS data value is displayed without changing the underlying data value) holds a variety of nifty tricks, such as nesting formats, formats that affect various style attributes, and conditional formatting. Add in picture formats, multi-label formats, using formats for data cleaning, and formats for joins and table look-ups, and we have quite a bag of tricks for the humble SAS format and PROC FORMAT, which are used to generate them. This paper describes a few very useful programming techniques that employ SAS formats. While this paper will be appropriate for the newest SAS user, it will also focus on some of the lesser-known features of formats and PROC FORMAT and so should be useful for even quite experienced users of SAS.
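One small taste of the bag of tricks (a value format used both for display and as a table lookup with PUT):

proc format;
   value agegrp
      low-<13 = 'Child'
      13-<20  = 'Teen'
      20-high = 'Adult';
run;

proc freq data=sashelp.class;
   tables age;
   format age agegrp.;               /* display grouping without changing the data */
run;

data class2;
   set sashelp.class;
   age_group = put(age, agegrp.);    /* the same format as a lookup to create a new variable */
run;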
Christianna Williams, Self-Employed
The latest release of SAS/STAT® software brings you powerful techniques that will make a difference in your work, whether your data are massive, missing, or somewhere in the middle. New imputation software for survey data adds to an expansive array of methods in SAS/STAT for handling missing data, as does the production version of the GEE procedure, which provides the weighted generalized estimating equation approach for longitudinal studies with dropouts. An improved quadrature method in the GLIMMIX procedure gives you accelerated performance for certain classes of models. The HPSPLIT procedure provides a rich set of methods for statistical modeling with classification and regression trees, including cross validation and graphical displays. The HPGENSELECT procedure adds support for spline effects and lasso model selection for generalized linear models. And new software implements generalized additive models by using an approach that handles large data easily. Other updates include key functionality for Bayesian analysis and pharmaceutical applications.
Maura Stokes, SAS
Bob Rodriguez, SAS
For the many relational database products that SAS/ACCESS® supports (Oracle, Teradata, DB2, MySQL, SQL Server, Hadoop, Greenplum, PC Files, to name but a few), there are a myriad of RDBMS-specific options at your disposal, but how do you know the right options for any given situation? How much data should you transfer at a time? Which SAS® functions can be passed through to the database--and which cannot? How do you verify that your processes are running efficiently? How do you test and validate any changes? The answer lies with the feedback capabilities of the SASTRACE system option.
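The key system option in action (the Teradata libname, table, and credentials are placeholders):

options sastrace=',,,d' sastraceloc=saslog nostsuffix;   /* log the SQL that SAS passes to the DBMS */

libname td teradata server=tdprod user=&td_user password=&td_pwd;

proc sql;
   select count(*) from td.transactions where region = 'EAST';
quit;

options sastrace=off;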
Andrew Howell, ANJ Solutions
SAS® Analytics enables organizations to tackle complex business problems using big data and to provide insights needed to make critical business decisions. A well-architected enterprise storage infrastructure is needed to realize the full potential of SAS Analytics. However, as the need for big data analytics and rapid response times increases, the performance gap between server speeds and traditional hard disk drive (HDD) based storage systems can be a significant concern. The growing performance gap can have detrimental effects, particularly when it comes to critical business applications. As a result, organizations are looking for newer, smarter, faster storage systems to accelerate business insights. IBM FlashSystem Storage systems store the data in flash memory. They are designed for dramatically faster access times and support incredible amounts of input/output operations per second (IOPS) and throughput, with significantly lower latency than HDD-based solutions. Due to their macro-efficiency design, FlashSystem Storage systems consume less power and have significantly lower cooling and space requirements, while allowing server processors to run SAS Analytics more efficiently. Being an all-flash storage system, IBM FlashSystem provides consistent low latency response across IOPS range, as the analytics workload scales. This paper introduces the benefits of IBM FlashSystem Storage for deploying SAS Analytics and highlights some of the deployment scenarios and architectural considerations. This paper also describes best practices and tuning guidelines for deploying SAS Analytics on FlashSystem Storage systems, which would help SAS Analytics customers in architecting solutions with FlashSystem Storage.
David Gimpl, IBM
Matt Key, IBM
Narayana Pattipati, IBM
Harry Seifert, IBM
Join us for lunch with SAS® Press Authors! They will share some of their favorite SAS® tips, summarize their experiences in writing a SAS Press title, and discuss how to become a published author. Bring your questions! There will be an opportunity for you to learn more about the publishing process and forthcoming titles, to chat with the Acquisition Editors, and to win SAS Press books.
In today's competitive job market, both recent graduates and experienced professionals are looking for ways to set themselves apart from the crowd. SAS® certification is one way to do that. SAS Institute Inc. offers a range of exams to validate your knowledge level. In writing this paper, we have drawn upon our personal experiences, remarks shared by new and longtime SAS users, and conversations with experts at SAS. We discuss what certification is and why you might want to pursue it. Then we share practical tips you can use to prepare for an exam and do your best on exam day.
Andra Northup, Advanced Analytic Designs, Inc.
Susan Slaughter, Avocet Solutions
Can you create hundreds of great looking Microsoft Excel tables all within SAS® and make them all Section 508 compliant at the same time? This paper examines how to use the ODS TAGSETS.EXCELXP statement and other Base SAS® features to create fantastic looking Excel worksheet tables that are all Section 508 compliant. This paper demonstrates that there is no need for any outside intervention or pre- or post-meddling with the Excel files to make them Section 508 compliant. We do it all with simple Base SAS code.
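The bare-bones pattern, without the Section 508-specific options the paper covers, looks something like this (file name and sheet options are illustrative):

ods tagsets.excelxp file='class_report.xml' style=htmlblue
    options(sheet_name='Students' embedded_titles='yes');

title 'Student Listing';
proc report data=sashelp.class;
   columns name sex age height weight;
run;

ods tagsets.excelxp close;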
Chris Boniface, U.S. Census Bureau
Chris Boniface, U.S. Census Bureau
When planning for a journey, one of the main goals is to get the best value possible. The same thing could be said for your corporate data as it journeys through the data management process. It is your goal to get the best data in the hands of decision makers in a timely fashion, with the lowest cost of ownership and the minimum number of obstacles. The SAS® Data Management suite of products provides you with many options for ensuring value throughout the data management process. The purpose of this session is to focus on how the SAS® Data Management solution can be used to ensure the delivery of quality data, in the right format, to the right people, at the right time. The journey is yours, the technology is ours--together, we can make it a fulfilling and rewarding experience.
Mark Craver, SAS
First introduced in 2013, the Cloudera Data Science Challenge is a rigorous competition in which candidates must provide a solution to a real-world big data problem that surpasses a benchmark specified by some of the world's elite data scientists. The Cloudera Data Science Challenge 2 (in 2014) involved detecting anomalies in the United States Medicare insurance system. Finding anomalous patients, procedures, providers, and regions in the competition's large, complex, and intertwined data sets required industrial-strength tools for data wrangling and machine learning. This paper shows how I did it with SAS®.
Patrick Hall, SAS
Customer expectations are set high when Microsoft Excel and Microsoft PowerPoint are used to design reports. Using SAS® for reporting has benefits because it generates plots directly from prepared data sets, automates the plotting process, minimizes labor-intensive manual construction using Microsoft products, and does not compromise the presentation value. SAS® Enterprise Guide® 5.1 has a powerful point-and-click method that is quick and easy to use. However, it is limited in its ability to customize the output to mimic manually created Microsoft graphics. This paper demonstrates why SAS Enterprise Guide is the perfect starting point for creating initial code for plots using SAS/GRAPH® point-and-click features and how the code can be enhanced using established PROC GPLOT, ANNOTATE, and ODS options to re-create the look and feel of plots generated by Excel and PowerPoint. Examples show the generation of plots and tables using PROC TABULATE to embed the plot data into the graphical output. Also included are tips for overcoming the ODS limitation of SAS® 9.3, which is used by SAS Enterprise Guide 5.1, to transfer the SAS graphical output to PowerPoint files. These SAS® 9.3 tips are contrasted with the new SAS® 9.4 ODS POWERPOINT statement that enables direct PowerPoint file creation from a SAS program.
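For the SAS 9.4 route mentioned at the end, the basic pattern is (the file name is a placeholder; the example uses a shipped sample data set rather than the paper's data):

ods powerpoint file='sales_summary.pptx' style=powerpointlight;

proc sgplot data=sashelp.prdsale;
   vbar country / response=actual stat=sum;
run;

ods powerpoint close;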
Christopher Klekar, Baylor Scott and White Health
Gabriela Cantu, Baylor Scott &White Health
The Query Builder in SAS® Enterprise Guide® is an excellent point-and-click tool that generates PROC SQL code and creates queries in SAS®. This hands-on workshop will give an overview of query options, sorting, simple and complex filtering, and joining tables. It is a great workshop for programmers and non-programmers alike.
Jennifer First-Kluge, Systems Seminar Consultants
A good system should embody the following characteristics: planned, maintainable, flexible, simple, accurate, restartable, reliable, reusable, automated, documented, efficient, modular, and validated. This is true of any system, but how to implement this in SAS® Enterprise Guide® is a unique endeavor. We provide a brief overview of these characteristics and then dive deeper into how a SAS Enterprise Guide user should approach developing both ad hoc and production systems.
Steven First, Systems Seminar Consultants
Jennifer First-Kluge, Systems Seminar Consultants
SAS® Enterprise Guide® is an extremely valuable tool for programmers. But it should also be leveraged by managers and executives to do data exploration, get information on the fly, and take advantage of the powerful analytics and reporting that SAS® has to offer. This can all be done without learning to program. This paper provides an overview of how SAS Enterprise Guide can help managers turn real-time data into real-time business decisions.
Jennifer First-Kluge, Systems Seminar Consultants
SAS® Studio (previously known as SAS® Web Editor) was introduced in the first maintenance release of SAS® 9.4 as an alternative programming environment to SAS® Enterprise Guide® and SAS® Display Manager. SAS Studio is different in many ways from SAS Enterprise Guide and SAS Display Manager. As a programmer, I currently use SAS Enterprise Guide to help me code, test, maintain, and organize my SAS® programs. I have SAS Display Manager installed on my PC, but I still prefer to write my programs in SAS Enterprise Guide because I know it saves my log and output whenever I run a program, even if that program crashes and takes the SAS session with it! So should I now be using SAS Studio instead, and should you be using it, too?
Philip Holland, Holland Numerics Limited
Wouldn't it be great if there were a way to deploy SAS® Grid Manager in discrete building blocks that have the proper balance of compute capability, RAM, and IO throughput? Well, now you can! This paper discusses the attributes of a well-designed SAS Grid Manager deployment and why it is sometimes difficult to engineer such an environment when IT responsibilities are segregated between server administration, network administration, and storage administration. The paper presents a concrete design that will position the customer for a successful SAS Grid Manager deployment of any size and that can also scale out easily as the needs of the organization grow.
Ken Gahagan, SAS
The purpose behind this paper is to provide a high-level overview of how SAS® security works in a way that can be communicated to both SAS administrators and users who are not familiar with SAS. It is not uncommon to hear SAS administrators complain that their IT department and users just don't 'get' it when it comes to metadata and security. For the administrator or user not familiar with SAS, understanding how SAS interacts with the operating system, the file system, external databases, and users can be confusing. Based on a SAS® Enterprise Office Analytics installation in a Windows environment, this paper walks the reader through all of the basic metadata relationships and how they are created, thus unraveling the mystery of how the host system, external databases, and SAS work together to provide users what they need, while reliably enforcing the appropriate security.
Charyn Faenza, F.N.B. Corporation
SAS® Model Manager provides an easy way to deploy analytical models into various relational databases or into Hadoop using either scoring functions or the SAS® Embedded Process publish methods. This paper gives a brief introduction of both the SAS Model Manager publishing functionality and the SAS® Scoring Accelerator. It describes the major differences between using scoring functions and the SAS Embedded Process publish methods to publish a model. The paper also explains how to perform in-database processing of a published model by using SAS applications as well as SQL code outside of SAS. In addition to Hadoop, SAS also supports these databases: Teradata, Oracle, Netezza, DB2, and SAP HANA. Examples are provided for publishing a model to a Teradata database and to Hadoop. After reading this paper, you should feel comfortable using a published model in your business environment.
Jifa Wei, SAS
Kristen Aponte, SAS
Are you a SAS® software user hoping to convince your organization to move to the latest product release? Has your management team asked how your organization can hire new SAS users familiar with the latest and greatest procedures and techniques? SAS® Studio and SAS® University Edition might provide the answers for you. SAS University Edition was created for teaching and learning. It's a new downloadable package of selected SAS products (Base SAS®, SAS/STAT®, SAS/IML®, SAS/ACCESS® Interface to PC Files, and SAS Studio) that runs on Windows, Linux, and Mac. With the exploding demand for analytical talent, SAS launched this package to grow the next generation of SAS users. Part of the way SAS is helping grow that next generation of users is through the interface to SAS University Edition: SAS Studio. SAS Studio is a developmental web application for SAS that you access through your web browser and--since the first maintenance release of SAS 9.4--is included in Base SAS at no additional charge. The connection between SAS University Edition and commercial SAS means that it's easier than ever to use SAS for teaching, research, and learning, from high schools to community colleges to universities and beyond. This paper describes the product, as well as the intent behind it and other programs that support it, and then talks about some successes in adopting SAS University Edition to grow the next generation of users.
Polly Mitchell-Guthrie, SAS
Amy Peters, SAS
Once you have a SAS® Visual Analytics environment up and running, the next important piece of the puzzle is to keep your users happy by keeping their data loaded and refreshed on a consistent basis. Loading data from the SAS Visual Analytics UI is both a great first step and great for ad hoc data exploration. But automating this data load so that users can focus on exploring the data and creating reports is where the power of SAS Visual Analytics comes into play. By using tried-and-true SAS® Data Integration Studio techniques (both out of the box and custom transforms), you can easily make this happen. Proven techniques such as sweeping from a source library and stacking similar Hadoop Distributed File System (HDFS) tables into SAS® LASR™ Analytic Server for consumption by SAS Visual Analytics are presented using SAS Visual Analytics and SAS Data Integration Studio.
Jason Shoffner, SAS
Brandon Kirk, SAS
SAS® Visual Analytics is a powerful tool for exploring, analyzing, and reporting on your data. Whether you understand your data well or are in need of additional insights, SAS Visual Analytics has the capabilities you need to discover trends, see relationships, and share the results with your information consumers. This paper presents a case study applying the capabilities of SAS Visual Analytics to NCAA Division I college football data from 2005 through 2014. It follows the process from reading raw comma-separated values (csv) files through processing that data into SAS data sets, doing data enrichment, and finally loading the data into in-memory SAS® LASR™ tables. The case study then demonstrates using SAS Visual Analytics to explore detailed play-by-play data to discover trends and relationships, as well as to analyze team tendencies to develop game-time strategies. Reports on player, team, conference, and game statistics can be used for fun (by fans) and for profit (by coaches, agents and sportscasters). Finally, the paper illustrates how all of these capabilities can be delivered via the web or to a mobile device--anywhere--even in the stands at the stadium. Whether you are using SAS Visual Analytics to study college football data or to tackle a complex problem in the financial, insurance, or manufacturing industry, SAS Visual Analytics provides the power and flexibility to score a big win in your organization.
John Davis, SAS
As a risk management unit, our team has invested countless hours in developing the processes and infrastructure necessary to produce large, multipurpose analytical base tables. These tables serve as the source for our key reporting and loss provisioning activities, the output of which (reports and GL bookings) is disseminated throughout the corporation. Invariably, questions arise and further insight is desired. Traditionally, any inquiries were returned to the original analyst for further investigation. But what if there was a way for the less technical user base to gain insights independently? Now there is with SAS® Visual Analytics. SAS Visual Analytics is often thought of as a big data tool, and while it is certainly capable in this space, its usefulness in regard to leveraging the value in your existing data sets should not be overlooked. By using these tried-and-true analytical base tables, you are guaranteed to achieve one version of the truth since traditional reports match perfectly to the data being explored. SAS Visual Analytics enables your organization to share these proven data assets with an entirely new population of data consumers--people with less-traditional data skills but with questions that need to be answered. Finally, all this is achieved without any additional data preparation effort and testing. This paper explores our experience with SAS Visual Analytics and the benefits realized.
Shaun Kaufmann, Farm Credit Canada
Data visualization can be like a GPS directing us to where in the sea of data we should spend our analytical efforts. In today's big data world, many businesses are still challenged to quickly and accurately distill insights and solutions from ever-expanding information streams. Wells Fargo CEO John Stumpf challenges us with the following: We all work for the customer. Our customers say to us, 'Know me, understand me, appreciate me and reward me.' Everything we need to know about a customer must be available easily, accurately, and securely, as fast as the best Internet search engine. For the Wells Fargo Credit Risk department, we have been focused on delivering more timely, accurate, reliable, and actionable information and analytics to help answer questions posed by internal and external stakeholders. Our group has to measure, analyze, and provide proactive recommendations to support and direct credit policy and strategic business changes, and we were challenged by a high volume of information coming from disparate data sources. This session focuses on how we evaluated potential solutions and created a new go-forward vision using a world-class visual analytics platform with strong data governance to replace manually intensive processes. As a result of this work, our group is on its way to proactively anticipating problems and delivering more dynamic reports.
Ryan Marcum, Wells Fargo Home Mortgage
This workshop provides hands-on experience using SAS® Enterprise Miner. Workshop participants will learn to: open a project, create and explore a data source, build and compare models, and produce and examine score code that can be used for deployment.
Chip Wells, SAS
This workshop provides hands-on experience using SAS® Forecast Server. Workshop participants will learn to: create a project with a hierarchy, generate multiple forecasts automatically, evaluate forecast accuracy, and build a custom model.
Catherine Truxillo, SAS
George Fernandez, SAS
Terry Woodfield, SAS
This workshop provides hands-on experience with SAS® Data Loader for Hadoop. Workshop participants will configure SAS Data Loader for Hadoop and use various directives inside SAS Data Loader for Hadoop to interact with data in the Hadoop cluster.
Kari Richardson, SAS
This workshop provides hands-on experience with SAS® Visual Analytics. Workshop participants will explore data with SAS® Visual Analytics Explorer and design reports with SAS® Visual Analytics Designer.
Nicole Ball, SAS
This workshop provides hands-on experience with SAS® Visual Statistics. Workshop participants will learn to: move between the Visual Analytics Explorer interface and Visual Statistics, fit automatic statistical models, create exploratory statistical analysis, compare models using a variety of metrics, and create score code.
Catherine Truxillo, SAS
Xiangxiang Meng, SAS
Mike Jenista, SAS
This workshop provides hands-on experience performing statistical analysis with SAS University Edition and SAS Studio. Workshop participants will learn to: install and set up the software, perform basic statistical analyses using tasks, connect folders to SAS Studio for data access and results storage, invoke code snippets to import CSV data into SAS, and create a code snippet.
Danny Modlin, SAS
This workshop provides hands-on experience using SAS® Text Miner. Workshop participants will learn to: read a collection of text documents and convert them for use by SAS Text Miner using the Text Import node, use the simple query language supported by the Text Filter node to extract information from a collection of documents, use the Text Topic node to identify the dominant themes and concepts in a collection of documents, and use the Text Rule Builder node to classify documents having pre-assigned categories.
Terry Woodfield, SAS
Is your company using or considering using SAP Business Warehouse (BW) powered by SAP HANA? SAS® provides various levels of integration with SAP BW in an SAP HANA environment. This integration enables you to not only access SAP BW components from SAS, but to also push portions of SAS analysis directly into SAP HANA, accelerating predictive modeling and data mining operations. This paper explains the SAS toolset for different integration scenarios, highlights the newest technologies contributing to integration, and walks you through examples of using SAS with SAP BW on SAP HANA. The paper is targeted at SAS and SAP developers and architects interested in building a productive analytical environment with the help of the latest SAS and SAP collaborative advancements.
Tatyana Petrova, SAS
Six Sigma is a business management strategy that seeks to improve the quality of process outputs by identifying and removing the causes of defects (errors) and minimizing variability in manufacturing and business processes. Each Six Sigma project carried out within an organization follows a defined sequence of steps and has quantified financial targets. All Six Sigma project methodologies include an extensive analysis phase in which SAS® software can be applied. JMP® software is widely used for Six Sigma projects. However, this paper demonstrates how Base SAS® (and a bit of SAS/GRAPH® and SAS/STAT® software) can be used to address a wide variety of Six Sigma analysis tasks. The reader is assumed to have a basic knowledge of Six Sigma methodology. Therefore, the focus of the paper is the use of SAS code to produce outputs for analysis.
Dan Bretheim, Towers Watson
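As a hedged illustration of one such Base SAS analysis task, the sketch below computes process capability indices (Cp and Cpk) from summary statistics; the input data set name and specification limits are hypothetical.

proc means data=measurements noprint;          /* hypothetical measurement data set   */
  var width;
  output out=stats mean=xbar std=s;
run;

data capability;
  set stats;
  lsl = 9.5;  usl = 10.5;                      /* assumed specification limits        */
  cp  = (usl - lsl) / (6*s);                   /* potential capability                */
  cpk = min(usl - xbar, xbar - lsl) / (3*s);   /* capability adjusted for centering   */
run;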
SAS® vApps (virtual applications) are a SAS® construct designed to logically and physically encapsulate a single- or multi-tier software solution into a virtual machine (or sometimes into multiple virtual machines). In this paper, we examine the conceptual, logical, and physical design perspectives that comprise a vApp, giving you a high-level understanding of both the technical and business benefits of vApps and the design decisions that go into envisioning and constructing SAS vApps. These are described in the context of the user roles involved in the life cycle of a vApp, and how those roles interact with a vApp at various points along its continuum.
Gary Kohan, SAS
Danny Hamrick, SAS
Connie Robison, SAS
Rob Stephens, SAS
Peter Villiers, SAS
One of the challenges in Secure Socket Layer (SSL) configuration for any web application is managing SSL certificates on the client and server sides. The SSL overview covers the structure of the x.509 certificate and SSL handshake process for the client and server components. There are three distinctive SSL client/server combinations within the SAS® Visual Analytics 7.1 web application configuration. The most common one is the browser accessing the web application. The second one is the internal SAS® web application accessing another SAS web application. The third one is a SAS Workspace Server executing a PROC or LIBNAME statement that accesses the SAS® LASR™ Authorization Service web application. Each SSL client/server scenario in the configuration is explained in terms of SSL handshake and certificate arrangement. Server identity certificate generation using Microsoft Active Directory Certificate Services (AD CS) for an enterprise-level organization is showcased. The certificates, in proper format, need to be supplied to the SAS® Deployment Wizard during the configuration process. The prerequisites and configuration steps are shown with examples.
Heesun Park, SAS
Jerome Hughes, SAS
Before the Internet era, you might not have come across many Sankey diagrams. These diagrams, which contain nodes and links (paths) that cross, intertwine, and have different widths, were named after Captain Sankey. He first created this type of diagram to visualize steam engine efficiency. Sankey diagrams used to have very specialized applications such as mapping out energy, gas, heat, or water distribution and flow, or cost budget flow. These days, it's become very common to display the flow of web traffic or customer actions and reactions through Sankey diagrams as well. Sankey diagrams in SAS® Visual Analytics easily enable users to create meaningful visualizations that represent the flow of data from one event or value to another. In this paper, we take a look at the components that make up a Sankey diagram: 1. Nodes; 2. Links; 3. Drop-off links; 4. A transaction. In addition, we look at a practical example of how Sankey diagrams can be used to evaluate web traffic and influence the design of a website. We use SAS Visual Analytics to demonstrate the best way to build a Sankey diagram.
Varsha Chawla, SAS
Renato Luppi, SAS
The Hadoop ecosystem is vast, and there's a lot of conflicting information available about how to best secure any given implementation. It's also difficult to fix any mistakes made early on once an instance is put into production. In this paper, we demonstrate the currently accepted best practices for securing and Kerberizing Hadoop clusters in a vendor-agnostic way, review some of the not-so-obvious pitfalls one could encounter during the process, and delve into some of the theory behind why things are the way they are.
Evan Kinney, SAS
This paper discusses the selection and transformation of continuous predictor variables for the fitting of binary logistic models. The paper has two parts: (1) A procedure and associated SAS® macro are presented that can screen hundreds of predictor variables and 10 transformations of these variables to determine their predictive power for a logistic regression. The SAS macro passes the training data set twice to prepare the transformations and one more time through PROC TTEST. (2) The FSP (function selection procedure) and a SAS implementation of FSP are discussed. The FSP tests all transformations from among a class of FSP transformations and finds the one with maximum likelihood when fitting the binary target. In a 2008 book, Patrick Royston and Willi Sauerbrei popularized the FSP.
Bruce Lund, Marketing Associates, LLC
Text messages (SMS) are a convenient way to receive notifications away from your computer screen. In SAS®, text messages can be sent to mobile phones via the DATA step. This paper briefly describes several methods for sending text messages from SAS and explores possible applications.
Matthew Slaughter, Coalition for Compassionate Care of California
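A minimal sketch of the DATA step approach, assuming an SMTP server has been configured (EMAILSYS and EMAILHOST system options) and using a hypothetical carrier e-mail-to-SMS gateway address:

filename sms email
  to="5551234567@txt.example.com"    /* hypothetical e-mail-to-SMS gateway address */
  subject="SAS job status";

data _null_;
  file sms;
  put "Nightly load finished at %sysfunc(datetime(), datetime20.)";
run;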
Understanding organizational trends in spending can help overseeing government agencies make appropriate modifications in spending to best serve the organization and the citizenry. However, given millions of line items for organizations annually, including free-form text, it is unrealistic for these overseeing agencies to succeed by using only a manual approach to this textual data. Using a publicly available data set, this paper explores how business users can apply text analytics using SAS® Contextual Analysis to assess trends in spending for particular agencies, apply subject matter expertise to refine these trends into a taxonomy, and ultimately, categorize the spending for organizations in a flexible, user-friendly manner. SAS® Visual Analytics enables dynamic exploration, including modeling results from SAS® Visual Statistics, in order to assess areas of potentially extraneous spending, providing actionable information to the decision makers.
Tom Sabo, SAS
Little did you know that your last delivery ran on incomplete data. To make matters worse, the client realized the issue first. Sounds like a horror story, no? A few preventative measures can go a long way in ensuring that your data are up-to-date and progressing normally. At the data set level, metadata comparisons between the current and previous data cuts will help identify observation and variable discrepancies. Comparisons will also uncover attribute differences at the variable level. At the subject level, they will identify missing subjects. By compiling these comparison results into a comprehensive scheduled e-mail, a data facilitator need only skim the report to confirm that the data are good to go--or in need of some corrective action. This paper introduces a suite of checks contained in a macro that will compare data cuts at the data set, variable, and subject levels and produce an e-mail report. The wide use of this macro will help all SAS® users create better deliveries while avoiding rework.
Spencer Childress, Rho, Inc.
Alexandra Buck, Rho, Inc.
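A simplified sketch of the data set-level check, assuming PREV and CURR librefs point to the previous and current data cuts; the macro described in the paper wraps logic like this and e-mails the results.

proc contents data=prev.demo out=meta_prev(keep=name type length) noprint; run;
proc contents data=curr.demo out=meta_curr(keep=name type length) noprint; run;

proc sort data=meta_prev; by name; run;
proc sort data=meta_curr; by name; run;

/* Variables whose attributes changed, or that exist in only one cut */
proc compare base=meta_prev compare=meta_curr out=meta_diff outnoequal noprint;
  id name;
run;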
SAS® functions provide amazing power to your DATA step programming. Specific functions are essential--they save you from writing volumes of unnecessary code. This presentation covers some of the most useful SAS functions. A few might be new to you and they can all change how you program and approach common programming tasks. The majority of these functions work with character data. There are functions that search for strings, others that find and replace strings, and some that join strings together. Furthermore, certain functions can measure the spelling distance between two strings (useful for fuzzy matching). Some of the newest and most incredible functions are not functions at all--they are call routines. Did you know that you can sort values within an observation? Did you know that not only can you identify the largest or smallest value in a list of variables, but you can identify the second or third or nth largest or smallest value? A knowledge of these functions will make you a better SAS programmer.
Ron Cody, Camp Verde Consulting
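A few of the functions and call routines mentioned above appear in this small, hedged sketch (variable names are illustrative):

data demo;
  x1=7; x2=3; x3=9; x4=1; x5=5;
  second_largest = largest(2, of x1-x5);      /* nth largest value in a list        */
  call sortn(of x1-x5);                       /* sort values within an observation  */

  str      = "the quik brown fox";
  pos      = findw(str, "fox");               /* search for a whole word            */
  fixed    = tranwrd(str, "quik", "quick");   /* find and replace a string          */
  joined   = catx(" ", "hello", "world");     /* join strings with a delimiter      */
  distance = spedis("quik", "quick");         /* spelling distance (fuzzy matching) */
run;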
Your enterprise SAS® Visual Analytics implementation is on its way to being adopted throughout your organization, unleashing the production of critical business content by business analysts, data scientists, and decision makers from many business units. This content is relied upon to inform decisions and provide insight into the results of those decisions. With the development of SAS Visual Analytics content decentralized into the hands of business users, the use of automated version control is essential to providing protection and recovery in the event of inadvertent changes to that content. Re-creation of complex report objects accidentally modified by a business user is time-consuming and can be eliminated by maintaining a version control repository of report (and other) objects created in SAS Visual Analytics. This paper walks through the steps for implementing an automated process for version control using SAS®. This process can be applied to all types of metadata objects used in multiple SAS application development and analysis environments, such as reports and explorations from SAS Visual Analytics, and jobs, tables, and libraries from SAS® Data Integration Studio. Basic concepts for the process, as well as specific techniques used for our implementation are included. So eliminate the risk of content loss for your business users and the burden of manual version control for your applications developers. Your IT shop will enjoy time savings and greater reliability.
Jerry Hosking, SAS
The report looks simple enough--a bar chart and a table, like something created with GCHART and REPORT procedures. But, there are some twists to the reporting requirements that make those procedures not quite flexible enough. The solution was to mix 'old' and 'new' DATA step-based techniques to solve the problem. Annotate datasets are used to create the bar chart and the Report Writing Interface (RWI) to create the table. Without a whole lot of additional code, an extreme amount of flexibility is gained.
Pete Lund, Looking Glass Analytics
Can you actually get something for nothing? With the SAS® PROC SQL subquery and remerging features, yes, you can. When working with categorical variables, you often need to add group descriptive statistics such as group counts and minimum and maximum values for further BY-group processing. Instead of first creating the group count and minimum or maximum values and then merging the summarized data set to the original data set, why not take advantage of PROC SQL to complete two steps in one? With the PROC SQL subquery and summary functions by the group variable, you can easily remerge the new group descriptive statistics with the original data set. Now with a few DATA step enhancements, you too can include percent calculations.
Sunil Gupta, Gupta Programming
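A hedged sketch of the two-steps-in-one idea, using a hypothetical CLAIMS data set; PROC SQL remerges the group statistics back onto every detail row (and notes the remerge in the log).

proc sql;
  create table claims_plus as
  select policy_type,
         claim_id,
         claim_amount,
         count(*)          as type_count,
         min(claim_amount) as type_min,
         max(claim_amount) as type_max,
         claim_amount / sum(claim_amount) as pct_of_type format=percent8.2
  from claims
  group by policy_type;
quit;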
The era of mass marketing is over. Welcome to the new age of relevant marketing where whispering matters far more than shouting. At ZapFi, using the combination of sponsored free Wi-Fi and real-time consumer analytics, we help businesses to better understand who their customers are. This gives businesses the opportunity to send highly relevant marketing messages based on the profile and the location of the customer. It also leads to new ways to build deeper and more intimate, one-on-one relationships between the business and the customer. During this presentation, ZapFi will use a few real-world examples to demonstrate that the future of mobile marketing is much more about data and far less about advertising.
Gery Pollet, ZapFi
A prevalent problem surrounding Extract, Transform, and Load (ETL) development is the ability to apply consistent logic and manipulation of source data when migrating to target data structures. Certain inconsistencies that add a layer of complexity include, but are not limited to, naming conventions and data types associated with multiple sources, numerous solutions applied by an array of developers, and multiple points of updating. In this paper, we examine the evolution of implementing a best practices solution during the process of data delivery, with respect to standardizing data. The solution begins with injecting the transformations of the data directly into the code at the standardized layer via Base SAS® or SAS® Enterprise Guide®. A more robust method that we explore is to apply these transformations with SAS® macros. This provides the capability to apply these changes in a consistent manner across multiple sources. We further explore this solution by implementing the macros within SAS® Data Integration Studio processes on the DataFlux® Data Management Platform. We consider these issues within the financial industry, but the proposed solution can be applied across multiple industries.
Avery Long, Financial Risk Group
Frank Ferriola, Financial Risk Group
Product affinity segmentation is a powerful technique for marketers and sales professionals to gain a good understanding of customers' needs, preferences, and purchase behavior. Performing product affinity segmentation is quite challenging in practice because product-level data usually has high skewness, high kurtosis, and a large percentage of zero values. The Doughnut Clustering method has been shown to be effective using real data, and was presented at SAS® Global Forum 2013 in the paper titled 'Product Affinity Segmentation Using the Doughnut Clustering Approach.' However, the Doughnut Clustering method is not a panacea for addressing the product affinity segmentation problem. There is a clear need for a comprehensive evaluation of this method in order to be able to develop generic guidelines for practitioners about when to apply it. In this paper, we meet the need by evaluating the Doughnut Clustering method on simulated data with different levels of skewness, kurtosis, and percentage of zero values. We developed a five-step approach based on Fleishman's power method to generate synthetic data with prescribed parameters. Subsequently, we designed and conducted a set of experiments to run the Doughnut Clustering method as well as the traditional K-means method as a benchmark on simulated data. We draw conclusions on the performance of the Doughnut Clustering method by comparing the clustering validity metric (the ratio of between-cluster variance to within-cluster variance), as well as the relative proportion of cluster sizes, against those of K-means.
Darius Baer, SAS
Goutam Chakraborty, Oklahoma State University
Video games used to be child's play. Today, millions of gamers of all ages kill countless in-game monsters and villains every day. Gaming is big business, and the data it generates is even bigger. Massive multi-player online games like World of Warcraft by Blizzard Entertainment not only generate data that Blizzard Entertainment can use to monitor users and their environments, but they can also be set up to log player data and combat logs client-side. Many users spend time analyzing their playing 'rotations' and use the information to adjust their playing style to deal more damage or, more appropriately, to heal themselves and other players. This paper explores World of Warcraft logs by using SAS® Visual Analytics and applies statistical techniques by using SAS® Visual Statistics to discover trends.
Mary Osborne, SAS
Adam Maness
Technology is always changing. To succeed in this ever-evolving landscape, organizations must embrace the change and look for ways to use it to their advantage. Even standard business tasks such as creating reports are affected by the rapid pace of technology. Reports are key to organizations and their customers. Therefore, it is imperative that organizations employ current technology to provide data in customized and meaningful reports across a variety of media. The SAS® Output Delivery System (ODS) gives you that edge by providing tools that enable you to package, present, and deliver report data in more meaningful ways, across the most popular desktop and mobile devices. To begin, the paper illustrates how to modify styles in your reports using the ODS CSS style engine, which incorporates the use of cascading style sheets (CSS) and the ODS document object model (DOM). You also learn how you can use SAS ODS to customize and generate reports in the body of e-mail messages. Then the paper discusses methods for enhancing reports and rendering them in desktop and mobile browsers by using the HTML and HTML5 ODS destinations. To conclude, the paper demonstrates the use of selected SAS ODS destinations and features in practical, real-world applications.
Chevell Parker, SAS
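One hedged example of the e-mail technique described above: the FILENAME EMAIL access method combined with an HTML ODS destination places the report in the message body (the address, style, and data are illustrative).

filename outmail email
  to="manager@example.com"
  subject="Daily Sales Summary"
  content_type="text/html";

ods html body=outmail style=HTMLBlue;
proc print data=sashelp.shoes(obs=10) noobs;
run;
ods html close;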
Every day, companies all over the world are moving their data into the Cloud. While there are many options available, much of this data will wind up in Amazon Redshift. As a SAS® user, you are probably wondering, 'What is the best way to access this data using SAS?' This paper discusses the many ways that you can use SAS/ACCESS® to get to Amazon Redshift. We compare and contrast the various approaches and help you decide which is best for you. Topics that are discussed are building a connection, choosing appropriate data types, and SQL functions.
James (Ke) Wang, SAS
Salman Maher, SAS
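A hedged sketch of the LIBNAME approach; the engine name and connection options shown follow the usual SAS/ACCESS pattern, and the host, database, and credentials are placeholders only.

libname aws redshift
  server="example-cluster.abc123.us-east-1.redshift.amazonaws.com"
  port=5439
  database=sales
  user=myuser
  password="XXXXXXXX";

proc sql;
  create table work.big_orders as
  select order_id, customer_id, order_total
  from aws.orders
  where order_total > 1000;
quit;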
SAS® provides a complex ecosystem with multiple tools and products that run in a variety of environments and modes. SAS provides numerous error-handling and program control statements, options, and features. These features can function differently according to the run environment, and many of them have constraints and limitations. In this presentation, we review a number of potential error-handling and program control strategies that can be employed, along with some of the inherent limitations of each. The bottom line is that there is no single strategy that will work in all environments, all products, and all run modes. Instead, programmers need to consider the underlying program requirements and choose the optimal strategy for their situation.
Thomas Billings, MUFG Union Bank, N.A.
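As one hedged example of such a strategy, a short macro can test the automatic SYSERR value after a critical step and stop a batch run; the thresholds and the exact behavior vary by environment and run mode, as the paper notes.

proc sort data=work.transactions; by id; run;

%macro halt_if_error;
  %if &syserr > 4 %then %do;        /* 0 = success, 4 = warnings, higher = errors */
    %put ERROR: Previous step failed with SYSERR=&syserr. Stopping.;
    %abort cancel;
  %end;
%mend halt_if_error;
%halt_if_error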
SAS® users organize their applications in a variety of ways. However, there are some approaches that are more successful, and some that are less successful. In particular, the need to process some of the code some of the time in a file is sometimes challenging. Reproducible research methods require that SAS applications be understandable by the author and other staff members. In this presentation, you learn how to organize and structure your SAS application to manage the process of data access, data analysis, and data presentation. The approach to structure applications requires that tasks in the process of data analysis be compartmentalized. This can be done using a well-defined program. The author presents his structuring algorithm, and discusses the characteristics of good structuring methods for SAS applications. Reproducible research methods are becoming more centrally important, and SAS users must keep up with the current developments.
Paul Thomas, ASUP Ltd
Most 'Design of Experiment' textbooks cover Type I, Type II, and Type III sums of squares, but many researchers and statisticians fall into the habit of using one type mindlessly. This breakout session reviews the basics and illustrates the importance of the choice of type as well as the variable definitions in PROC GLM and PROC REG.
Sheila Barron, University of Iowa
Michelle Mengeling, Comprehensive Access & Delivery Research & Evaluation-CADRE, Iowa City VA Health Care System
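A hedged illustration with a hypothetical unbalanced two-factor design; PROC GLM can print several sums-of-squares types side by side so the differences between them are visible.

proc glm data=clinic_trial;
  class treatment site;
  model response = treatment site treatment*site / ss1 ss2 ss3;
run;
quit;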
One of the more commonly needed operations in SAS® programming is to determine the value of one variable based on the value of another. A series of techniques and tools have evolved over the years to make the matching of these values go faster, smoother, and easier. A majority of these techniques require operations such as sorting, searching, and comparing. As it turns out, these types of techniques are some of the more computationally intensive. Consequently, an understanding of the operations involved and a careful selection of the specific technique can often save the user a substantial amount of computing resources. Many of the more advanced techniques can require substantially fewer resources. It is incumbent on the user to have a broad understanding of the issues involved and a more detailed understanding of the solutions available. Even if you do not currently have a BIG data problem, you should at the very least have a basic knowledge of the kinds of techniques that are available for your use.
Art Carpenter, California Occidental Consultants
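One of the more resource-friendly techniques alluded to above is a hash object lookup, sketched here with hypothetical LOOKUP and TRANSACTIONS data sets:

data matched;
  if _n_ = 1 then do;
    if 0 then set lookup;                    /* adds DESCRIPTION to the PDV      */
    declare hash h(dataset:"lookup");
    h.defineKey("prod_id");
    h.defineData("description");
    h.defineDone();
  end;
  set transactions;
  if h.find() = 0 then output;               /* keep rows with a matching key    */
run;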
SAS® Office Analytics, SAS® Visual Analytics, and SAS® Studio provide excellent data analysis and report generation. When these products are combined, their deep interoperability enables you to take your analysis and reporting to the next level. Build interactive reports in SAS® Visual Analytics Designer, and then view, customize and comment on them from Microsoft Office and SAS® Enterprise Guide®. Create stored processes in SAS Enterprise Guide, and then run them in SAS Visual Analytics Designer, mobile tablets, or SAS Studio. Run your SAS Studio tasks in SAS Enterprise Guide and Microsoft Office using data provided by those applications. These interoperability examples and more will enable you to combine and maximize the strength of each of the applications. Learn more about this integration between these products and what's coming in the future in this session.
David Bailey, SAS
Tim Beese, SAS
Casey Smith, SAS
Understanding the behavior of your customers is key to improving and maintaining revenue streams. It is a critical requirement in the crafting of successful marketing campaigns. Using SAS® Visual Analytics, you can analyze and explore user behavior, click paths, and other event-based scenarios. Flow visualizations help you to best understand hotspots, highlight common trends, and find insights in individual user paths or in aggregated paths. This paper explains the basic concepts of path analysis as well as provides detailed background information about how to use flow visualizations within SAS Visual Analytics.
Falko Schulz, SAS
Olaf Kratzsch, SAS
How many times has this happened to you? You create a really helpful report and share it with others. It becomes popular and you find yourself running it over and over. Then they start asking, But can't you re-run it and just change ___? (Fill in the blank with whatever simple request you can think of.) Don't you want to just put the report out as a web page with some basic parameters that users can choose themselves and run when they want? Consider writing your own task in SAS® Studio! SAS Studio includes several predefined tasks, which are point-and-click user interfaces that guide the user through an analytical process. For example, tasks enable users to create a bar chart, run a correlation analysis, or rank data. When a user selects a task option, SAS® code is generated and run on the SAS server. Because of the flexibility of the task framework, you can make a copy of a predefined task and modify it or create your own. Tasks use the same common task model and the Velocity Template Language--no Java programming or ActionScript programming is required. Once you have the interface set up to generate the SAS code you need, then you can publish the task for other SAS Studio users to use or you can use a straight URL. Now that others can generate the output themselves, you actually might have time to go fishing!
Christie Corcoran, SAS
Amy Peters, SAS
Data simulation is a fundamental tool for statistical programmers. SAS® software provides many techniques for simulating data from a variety of statistical models. However, not all techniques are equally efficient. An efficient simulation can run in seconds, whereas an inefficient simulation might require days to run. This paper presents 10 techniques that enable you to write efficient simulations in SAS. Examples include how to simulate data from a complex distribution and how to use simulated data to approximate the sampling distribution of a statistic.
Rick Wicklin, SAS
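A hedged sketch of one efficiency principle the paper covers: simulate every sample in a single DATA step, then compute the statistic with one BY-group pass rather than looping a macro over thousands of small steps.

%let NumSamples = 1000;    /* number of simulated samples */
%let N          = 50;      /* size of each sample         */

data sim;
  call streaminit(4321);
  do sampleID = 1 to &NumSamples;
    do i = 1 to &N;
      x = rand("Normal", 0, 1);
      output;
    end;
  end;
run;

proc means data=sim noprint;
  by sampleID;
  var x;
  output out=sampling_dist mean=sampleMean;   /* approximate sampling distribution */
run;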
This session describes our journey from data acquisition to text analytics on clinical, textual data.
Mark Pitts, Highmark Health
Streaming data and real-time analytics are being talked about in a lot of different situations. SAS has been exploring a variety of use cases that meld together streaming data and analytics to drive immediate action. We want to share these stories with you and get your feedback on what's going on in your world, where the waiting game is no longer what's being played.
Steve Sparano, SAS
PROC MIXED is one of the most popular SAS procedures to perform longitudinal analysis or multilevel models in epidemiology. Model selection is one of the fundamental questions in model building. One of the most popular and widely used strategies is model selection based on information criteria, such as Akaike Information Criterion (AIC) and Sawa Bayesian Information Criterion (BIC). This strategy considers both fit and complexity, and enables multiple models to be compared simultaneously. However, there is no existing SAS procedure to perform model selection automatically based on information criteria for PROC MIXED, given a set of covariates. This paper provides information about using the SAS %ic_mixed macro to select a final model with the smallest value of AIC and BIC. Specifically, the %ic_mixed macro will do the following: 1) produce a complete list of all possible model specifications given a set of covariates, 2) use do loop to read in one model specification every time and save it in a macro variable, 3) execute PROC MIXED and use the Output Delivery System (ODS) to output AICs and BICs, 4) append all outputs and use the DATA step to create a sorted list of information criteria with model specifications, and 5) run PROC REPORT to produce the final summary table. Based on the sorted list of information criteria, researchers can easily identify the best model. This paper includes the macro programming language, as well as examples of the macro calls and outputs.
Qinlei Huang, St Jude Children's Research Hospital
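A hedged sketch of the core of step 3: ODS captures the fit statistics (including AIC and BIC) for one candidate PROC MIXED model; the data set and variable names are hypothetical.

ods output FitStatistics=fit_model1;

proc mixed data=study method=ml;
  class id;
  model outcome = age sex visit / solution;
  random intercept / subject=id;
run;

proc print data=fit_model1; run;   /* -2 log likelihood, AIC, AICC, BIC */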
With cloud service providers such as Amazon commodifying the process to create a server instance based on desirable OS and sizing requirements for a SAS® implementation, a definite advantage is the speed and simplicity of getting started with a SAS installation. Planning horizons are nonexistent, and initial financial outlay is economized because no server hardware procurement occurs, no data center space reserved, nor any hardware/OS engineers assigned to participate in the initial server instance creation. The cloud infrastructure seems to make the OS irrelevant, an afterthought, and even just an extension of SAS software. In addition, if the initial sizing, memory allocation, or disk space selection results later in some deficiency or errors in SAS processing, the flexibility of the virtual server instance allows the instance image to be saved and restored to a new, larger, or performance-enhanced instance at relatively low cost and minor inconvenience to production users. Once logged on with an authenticated ID, with Internet connectivity established, a SAS installer ID created, and a web browser started, it's just a matter of downloading the SAS® Download Manager to begin the creation of the SAS software depot. Many Amazon cloud instances have download speeds that tend to be greater and processing time that is shorter to create the depot. Installing SAS via the SAS® Deployment Wizard is not dissimilar on a cloud instance versus a server instance, and all the same challenges (for example, SSL, authentication and single sign-on, and repository migration) apply. Overall, SAS administrators have an optimal, straightforward, and low-cost opportunity to deploy additional SAS instances running different versions or more complex configurations (for example, SAS® Grid Computing, resource-based load balancing, and SAS jobs split and run parallel across multiple nodes). While the main advantages of using a cloud instance to deploy a new SAS i
mplementation tend to revolve around efficiency, speed, and affordability, its pitfalls have to do with vulnerability to intrusion and external attack. The same easy, low-cost server instance launch also has a negative flip side that includes a possible lack of experienced OS oversight and basic security precaution. At the moment, Linux administrators around the country are patching their physical and virtual systems to prevent the spread of the Shellshock vulnerability for web servers that originated in cloud instances. Cloud instances have also been targeted and credentials compromised which, in some cases, have allowed thousands of new instances to be spun up and billed to an unsuspecting AWS licensed user. Extra steps have to be taken to prevent the aforementioned attacks and fortunately, there are cloud-based methods available. By creating a Virtual Private Cloud (VPC) instance, AWS users can restrict access by originating IP addresses while also requiring additional administration, including creating entries for application ports that require external access. Moreover, with each step toward more secure cloud implementations, there are additional complexities that arise, including making additional changes or compromises with corporate firewall policy and user authentication methods.
Jeff Lehmann, Slalom Consulting
Many languages, including the SAS DATA step, have extensive debuggers that can be used to detect logic errors in programs. Another easy way to detect logic errors is to simply display messages and variable content at strategic times. The PUTLOG statement will be discussed and examples given that show how using this statement is probably the easiest and most flexible way to detect and correct errors in your DATA step logic.
Steven First, Systems Seminar Consultants
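A small hedged example of the technique; the input data set and variable names are illustrative.

data clean;
  set raw;
  if age < 0 or age > 120 then
    putlog "WARNING: suspicious AGE value at " _n_= age= id=;
  if missing(dob) then
    putlog "NOTE: missing date of birth: " _all_;
run;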
Tracking responses is one of the most important aspects of the campaign life cycle for a marketing analyst; yet this is often a daunting task. This paper provides guidance for how to determine what is a response, how it is defined for your business, and how you collect data to support it. It provides guidance in the context of SAS® Marketing Automation and beyond.
Pamela Dixon, SAS
In this session, I discuss an overall approach to governing Big Data. I begin with an introduction to Big Data governance and the governance framework. Then I address the disciplines of Big Data governance: data ownership, metadata, privacy, data quality, and master and reference data management. Finally, I discuss the reference architecture of Big Data, and how SAS® tools can address Big Data governance.
Sunil Soares, Information Asset
The first thing that you need to know is that SAS® software stores dates and times as numbers. However, this is not the only thing that you need to know, and this presentation gives you a solid base for working with dates and times in SAS. It also introduces you to functions and features that enable you to manipulate your dates and times with surprising flexibility. This paper also shows you some of the possible pitfalls with dates (and with times and datetimes) in your SAS code, and how to avoid them. We show you how SAS handles dates and times through examples, including the ISO 8601 formats and informats, and how to use dates and times in TITLE and FOOTNOTE statements. We close with a brief discussion of Microsoft Excel conversions.
Derek Morgan
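A few of the functions and formats discussed, gathered into one hedged example:

data _null_;
  start   = "15MAR2015"d;                    /* date literal                    */
  iso     = put(start, e8601da.);            /* ISO 8601 text: 2015-03-15       */
  nextmo  = intnx("month", start, 1, "b");   /* first day of the next month     */
  elapsed = intck("day", start, today());    /* days between two dates          */
  put start= date9. nextmo= date9. iso= elapsed=;
run;

title "Report generated %sysfunc(today(), worddate.)";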
Many industries are challenged with requirements to protect information and limit its access. In this paper, we will discuss various approaches for row-level access to LASR tables and demonstrate our implementation. Methods discussed in this paper include security joins in data queries, using a star schema with a security table as one dimension, permission conditions based on user information stored in metadata, and user IDs being associated with data as a dedicated column. The paper then identifies shortcomings and strengths of various approaches as well as our iterations to satisfy business needs that led us to our row-level permissions implementation. In addition, the paper offers recommendations and other considerations to keep in mind while working on row-level permissions with LASR tables.
Emre Saricicek, University of North Carolina at Chapel Hill
Dean Huff, UNC
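A hedged sketch of the security-join approach named above, assuming a hypothetical REGION_PERMISSIONS table that maps user IDs to the regions they are entitled to see:

proc sql;
  create table work.sales_secure as
  select s.*
  from sales as s
       inner join region_permissions as p
         on s.region = p.region
  where upcase(p.userid) = upcase("&sysuserid");
quit;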
The SAS® LASR™ Analytic Server acts as a back-end, in-memory analytics engine for solutions such as SAS® Visual Analytics and SAS® Visual Statistics. It is designed to exist in a massively scalable, distributed environment, often alongside Hadoop. This paper guides you through the impacts of the architecture decisions shared by both software applications and what they specifically mean for SAS®. We then present positive actions you can take to rebound from unexpected outages and resume efficient operations.
Rob Collum, SAS
This unique culture has access to lots of data, unstructured and structured; is innovative, experimental, groundbreaking, and doesn't follow convention; and has access to powerful new infrastructure technologies and scalable, industry-standard computing power like never before. The convergence of data, innovative spirit, and the means to process it is what makes this a truly unique culture. In response to that, SAS® proposes The New Analytics Experience. Attend this session to hear more about the New Analytics Experience and the latest Intel technologies that make it possible.
Mark Pallone, Intel
It is well-known in the world of SAS® programming that the REPORT procedure is one of the best procedures for creating dynamic reports. However, you might not realize that the compute block is where all of the action takes place! Its flexibility enables you to customize your output. This paper is a primer for using a compute block. With a compute block, you can easily change values in your output with the proper assignment statement and add text with the LINE statement. With the CALL DEFINE statement, you can adjust style attributes such as color and formatting. Through examples, you learn how to apply these techniques for use with any style of output. Understanding how to use the compute-block functionality empowers you to move from creating a simple report to creating one that is more complex and informative, yet still easy to use.
Jane Eslinger, SAS
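A compact hedged example using a SASHELP table: one compute block highlights cells with CALL DEFINE, and another adds text with the LINE statement.

proc report data=sashelp.class nowd;
  column sex age height;
  define sex    / group 'Sex';
  define age    / analysis mean format=5.1 'Mean Age';
  define height / analysis mean format=6.1 'Mean Height';

  /* Highlight the age cell when the group mean exceeds 13 */
  compute age;
    if age.mean > 13 then
      call define(_col_, 'style', 'style={background=lightyellow}');
  endcomp;

  /* Add a text line after the report */
  compute after;
    line 'Source: SASHELP.CLASS (illustrative only)';
  endcomp;
run;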
If you are one of the many customers who want to move your SAS® data to Hadoop, one decision you will encounter is what data storage format to use. There are many choices, and all have their pros and cons. One factor to consider is how you currently store your data. If you currently use the Base SAS® engine or the SAS® Scalable Performance Data Engine, then using the SPD Engine with Hadoop will enable you to continue accessing your data with as little change to your existing SAS programs as possible. This paper discusses the enhancements, usage, and benefits of the SPD Engine with Hadoop.
Lisa Brown, SAS
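A hedged sketch of the idea: once an SPD Engine LIBNAME points at HDFS, existing two-level-name programs keep working. The path is hypothetical, and the HDFSHOST= option assumes a Hadoop client configuration is already in place.

libname spdat spde "/user/sasdemo/spde_data" hdfshost=default;

data spdat.orders;              /* write an existing SAS table into HDFS */
  set work.orders;
run;

proc means data=spdat.orders;   /* downstream code is unchanged          */
  class region;
  var amount;
run;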
Building a holistic view of the customer is becoming the norm across industries. The financial services industry and retail firms have been at the forefront of striving for this goal. Firm ABC is a large insurance firm based in the United States. It uses multiple campaign management platforms across different lines of business. Marketing campaigns are deployed in isolation. Similarly, responses are tracked and attributed in silos. This prevents the firm from obtaining a holistic view of its customers across products and lines of business and leads to gaps and inefficiencies in data management, campaign management, reporting, and analytics. Firm ABC needed an enterprise-level solution that addressed how to integrate with different types of data sources (both external and internal) and grow as a scalable and agile marketing and analytics organization; how to deploy campaign and contact management using a centralized platform to reduce overlap and redundancies and deliver a more coordinated marketing messaging to customers; how to perform more accurate attribution that, in turn, drives marketing measurement and planning; how to implement more sophisticated and visual self-service reporting that enables business users to make marketing decisions; and how to build advanced analytics expertise in-house. The solution needed to support predictive modeling, segmentation, and targeting. Based on these challenges and requirements, the firm conducted an extensive RFP process and reviewed various vendors in the enterprise marketing and business intelligence space. Ultimately, SAS® Customer Intelligence and SAS® Enterprise BI were selected to help the firm achieve its goals and transition to a customer-centric organization. The ability for SAS® to deliver a custom-hosted solution was one of the key drivers for this decision, along with its experience in the financial services and insurance industries. Moreover, SAS can provide the much-needed flexibility and scala
bility, whether it is around integrating external vendors, credit data, and mail-shop processing, or managing sensitive customer information. This presentation provides detailed insight on the various modules being implemented by the firm, how they will be leveraged to address the goals, and what their roles are in the future architecture. The presentation includes detailed project implementation and provides insights, best practices, and challenges faced during the project planning, solution design, governance, and development and production phases. The project team included marketers, campaign managers, data analysts, business analysts, and developers with sponsorship and participation from the C suite. The SAS® Transformation Project provides insights and best practices that prove useful for business users, IT teams, and senior management. The scale, timing, and complexity of the solution deployment make it an interesting and relevant case study, not only for financial clients, but also for any large firm that has been tasked with understanding its customers and building a holistic customer profile.
Ronak Shah, Slalom Consulting
Minza Zahid, Slalom Consulting
The bookBot Identity: January 2013. With no memory of it from the past, students and faculty at NC State awake to find the Hunt Library just opened, and inside it, the mysterious and powerful bookBot. A true physical search engine, the bookBot, without thinking, relentlessly pursues, captures, and delivers to the patron any requested book (those things with paper pages--remember?) from the Hunt Library. The bookBot Supremacy: Some books were moved from the central campus library to the new Hunt Library. Did this decrease overall campus circulation or did the Hunt Library and its bookBot reign supreme in increasing circulation? The bookBot Ultimatum: To find out if the opening of the Hunt Library decreased or increased overall circulation. To address the bookBot Ultimatum, the Circulation Statistics Investigation (CSI) team uses the power of SAS® analytics to model library circulation before and after the opening of the Hunt Library. The bookBot Legacy: Join us for the adventure-filled story. Filled with excitement and mystery, this talk is bound to draw a much bigger crowd than had it been more honestly titled Intervention Analysis for Library Data. Tools used are PROC ARIMA, PROC REG, and PROC SGPLOT.
David Dickey, NC State University
John Vickery, North Carolina State University
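A hedged sketch of the intervention-analysis idea behind the story, with hypothetical series and variable names: a step indicator marks the Hunt Library opening and enters PROC ARIMA as an input series.

data circ;
  set monthly_circulation;
  hunt_open = (month >= "01JAN2013"d);   /* step intervention: library opening */
run;

proc arima data=circ;
  identify var=checkouts crosscorr=(hunt_open);
  estimate input=(hunt_open) p=1;
run;
quit;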
SAS® Visual Analytics is a product that easily enables the interactive analysis of data. It offers capabilities for analyzing data using a visual approach. This paper discusses architecture options for configuring a SAS Visual Analytics installation that serves multiple customers in parallel. The overall objective is to create an environment that scales with the volume of data and also with the number of customer groups. This paper explains several concepts for serving multiple customers groups and explains the pros and cons of each approach.
Jan Bigalke, Allianz Managed Operations and Services SE
SAS® Visual Analytics is very responsive in analyzing historical data, and it takes advantage of in-memory data. Data query, exploration, and reports form the basis of the tool, which also has other forward-looking techniques such as star schemas and stored processes. A security model is established by defining the permissions through a web-based application that is stored in a database table. That table is brought to the SAS Visual Analytics environment as a LASR table. Typically, security is established based on the departmental access, geographic region, or other business-defined groups. This permission table is joined with the underlying base table. Security is defined by a data filter expression through a conditional grant using SAS® metadata identities. The in-memory LASR star schema is very similar to a typical star schema. A single fact table that is surrounded by dimension tables is used to create the star schema. The star schema gives you the advantage of loading data quickly on the fly. Each of the dimension tables is joined to the fact table with a dimension key. A SAS application that gives the flexibility and the power of coding is created as a stored process that can be executed as requested by client applications such as SAS Visual Analytics. Input data sources for stored processes can be either LASR tables in the SAS® LASR™ Analytic Server or any other data that can be reached through the stored process code logic.
Arun Sugumar, Kavi Associates
Vimal Raj Arockiasamy, Kavi Associates
A maximum harvest in farming analytics is achieved only if analytics can also be operationalized at the level of core business applications. Mapped to the use of SAS® Analytics, the fruits of SAS can be shared with Enterprise Business Applications by SAP. Learn how your SAS environment, including the latest of SAS® In-Memory Analytics, can be integrated with SAP applications based on the SAP In-Memory Platform SAP HANA. We'll explore how a SAS® Predictive Modeling environment can be embedded inside SAP HANA and how native SAP HANA data management capabilities such as SAP HANA Views, Smart Data Access, and more can be leveraged by SAS applications and contribute to an end-to-end in-memory data management and analytics platform. Come and see how you can extend the reach of your SAS® Analytics efforts with the SAP HANA integration!
Morgen Christoph, SAP SE
So you have big data and need to know how to quickly and efficiently keep your data up-to-date and available in SAS® Visual Analytics? One of the challenges that customers often face is how to regularly update data tables in the SAS® LASR™ Analytic Server, the in-memory analytical platform for SAS Visual Analytics. Is appending data always the right answer? What are some of the key things to consider when automating a data update and load process? Based on proven best practices and existing customer implementations, this paper provides you with answers to those questions and more, enabling you to optimize your update and data load processes. This ensures that your organization develops an effective and robust data refresh strategy.
Kerri L. Rivers, SAS
Christopher Redpath, SAS
There are many gotchas when you are trying to automate a well-written program. The details differ depending on the way you schedule the program and the environment you are using. This paper covers system options, error handling logic, and best practices for logging. Save time and frustration by using these tips as you schedule programs to run.
Adam Hood, Slalom Consulting
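A hedged sketch of two of those practices: route the log to a dated file and report the final condition code when the job ends (the log path is a placeholder).

proc printto log="/logs/nightly_load_&sysdate9..log" new;
run;

/* ... scheduled program steps ... */

proc printto; run;                        /* restore the default log */
%put NOTE: Final condition code SYSCC=&syscc;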
The Work library is at the core of most SAS® programs, but programmers tend to ignore it unless something breaks. This paper first discusses the USER= system option for saving the Work files in a directory. Then, we cover a similar macro-controlled method for saving the files in your Work library, and the interaction of this method with OPTIONS NOREPLACE and the syntax check options. A number of SAS system options that help you to manage Work libraries are discussed: WORK=, WORKINIT, WORKTERM; these options might be restricted by SAS Administrators. Additional considerations in managing Work libraries are discussed: handling large files, file compression, programming style, and macro-controlled deletion of redundant files in SAS® Enterprise Guide®.
Thomas Billings, MUFG Union Bank, N.A.
Avinash Kalwani, Oklahoma State University
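A minimal sketch of the USER= technique with a hypothetical path: one-level data set names then resolve to the designated library instead of Work.

libname scratch "/projects/studyA/work_copy";
options user=scratch;

data test;          /* TEST now lands in SCRATCH, not WORK */
  x = 1;
run;

options user=work;  /* revert to the usual one-level-name behavior */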
The SAS® macro processor is a powerful ally, but it requires respect. There are a myriad of macro functions available, most of which emulate DATA step functions, but some of which require special consideration to fully use their capabilities. Questions to be answered include the following: When should you protect the macro variable? During macro compilation, during macro execution? (What do those phrases even mean?) How do you know when to use which macro function? %BQUOTE(), %NRBQUOTE(), %UNQUOTE(), %SUPERQ(), and so on? What's with the %Q prefix of some macro functions? And more: %SYSFUNC(), %SYSCALL, and so on. Macro developers will no longer be daunted by the complexity of choices. With a little clarification, the power of these macro functions will open up new possibilities.
Andrew Howell, ANJ Solutions
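A small hedged illustration of compile-time versus execution-time masking; the macro and the values passed to it are hypothetical.

%macro show(text);
  %if %bquote(&text) = %then %put NOTE: nothing was passed.;
  %else %put NOTE: received -> %bquote(&text);
%mend show;

%show()
%show(%str(Profit, Loss & Taxes))   /* %STR masks the comma at compile time         */
%show(%nrstr(AT&T results))         /* %NRSTR also masks the & and % macro triggers */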
Using PROC TRANSPOSE to make wide files wider requires running separate PROC TRANSPOSE steps for each variable that you want transposed, as well as a DATA step using a MERGE statement to combine all of the transposed files. In addition, if you want the variables in a specific order, an extra DATA step is needed to rearrange the variable ordering. This paper presents a method that accomplishes the task in a simpler manner using less code and requiring fewer steps, and which runs n times faster than PROC TRANSPOSE (where n=the number of variables to be transposed).
Keshan Xia, 3GOLDEN Beijing Technologies Co. Ltd., Beijing, China
Matthew Kastin, I-Behavior
Arthur Tabachneck, AnalystFinder, Inc.
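A hedged sketch of the single-DATA-step idea for one simple case: HAVE is sorted by ID and holds up to three visits per subject, with WEIGHT and HEIGHT to be spread into columns (all names are illustrative).

data wide;
  set have;
  by id;
  array wt {3} weight1-weight3;
  array ht {3} height1-height3;
  retain weight1-weight3 height1-height3;
  if first.id then call missing(of wt{*}, of ht{*});
  wt{visit} = weight;                     /* VISIT assumed to be 1, 2, or 3 */
  ht{visit} = height;
  if last.id then output;
  keep id weight1-weight3 height1-height3;
run;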
Currently, there are several methods for reading JSON formatted files into SAS® that depend on the version of SAS and which products are licensed. These methods include user-defined macros, visual analytics, PROC GROOVY, and more. The user-defined macro %GrabTweet, in particular, provides a simple way to directly read JSON-formatted tweets into SAS® 9.3. The main limitation of %GrabTweet is that it requires the user to repeatedly run the macro in order to download large amounts of data over time. Manually downloading tweets while conforming to the Twitter rate limits might cause missing observations and is time-consuming overall. Imagine having to sit by your computer the entire day to continuously grab data every 15 minutes, just to download a complete data set of tweets for a popular event. Fortunately, the %GrabTweet macro can be modified to automate the retrieval of Twitter data based on the rate that the tweets are coming in. This paper describes the application of the %GrabTweet macro combined with batch processing to download tweets without manual intervention. Users can specify the phrase parameters they want, run the batch processing macro, leave their computer to automatically download tweets overnight, and return to a complete data set of recent Twitter activity. The batch processing implements an automated retrieval of tweets through an algorithm that assesses the rate of tweets for the specified topic in order to make downloading large amounts of data simpler and effortless for the user.
Isabel Litton, California Polytechnic State University, SLO
Rebecca Ottesen, City of Hope and Cal Poly SLO
If you are looking for ways to make your graphs more communication-effective, this tutorial can help. It covers both the new ODS Graphics SG (Statistical Graphics) procedures and the traditional SAS/GRAPH® software G procedures. The focus is on management reporting and presentation graphs, but the principles are relevant for statistical graphs as well. Important features unique to SAS® 9.4 are included, but most of the designs and construction methods apply to earlier versions as well. The principles of good graphic design are actually independent of your choice of software.
LeRoy Bessler, Bessler Consulting and Research
This paper explores feature extraction from unstructured text variables using Term Frequency-Inverse Document Frequency (TF-IDF) weighting algorithms coded in Base SAS®. Data sets with unstructured text variables can often hold a lot of potential to enable better predictive analysis and document clustering. Each of these unstructured text variables can be used as inputs to build an enriched data set-specific inverted index, and the most significant terms from this index can be used as single word queries to weight the importance of the term to each document from the corpus. This paper also explores the usage of hash objects to build the inverted indices from the unstructured text variables. We find that hash objects provide a considerable increase in algorithm efficiency, and our experiments show that a novel weighting algorithm proposed by Paik (2013) best enables meaningful feature extraction. Our TF-IDF implementations are tested against a publicly available data breach data set to understand patterns specific to insider threats to an organization.
Ila Gokarn, Singapore Management University
Clifton Phua, SAS
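For orientation, here is a hedged PROC SQL sketch of the classic TF-IDF weighting (not Paik's variant), assuming a hypothetical HAVE table with one row per document-term pair and TF = the within-document term count:

proc sql noprint;
  select count(distinct doc_id) into :ndocs trimmed from have;

  create table tfidf as
  select h.doc_id,
         h.term,
         h.tf,
         log( &ndocs / d.df ) as idf,           /* inverse document frequency */
         calculated idf * h.tf as tf_idf
  from have as h
       inner join
       (select term, count(distinct doc_id) as df
          from have
         group by term) as d
    on h.term = d.term;
quit;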
We've all heard it before: 'If two ampersands don't work, add a third.' But how many of us really know how ampersands work behind the scenes? We show the function of multiple ampersands by going through examples of the common two- and three-ampersand scenarios, and expand to show four, five, six, and even seven ampersands, and explain when they might be (rarely) useful.
Joe Matise, NORC at the University of Chicago
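A compact hedged illustration of the rescanning that drives two and three ampersands:

%let dsn       = sales;
%let sales_lib = dwh;

%put &dsn;          /* one pass:  sales                                         */
%put &&dsn;         /* && -> &, then DSN resolves on the rescan: sales          */
%put &&&dsn._lib;   /* && -> &, &dsn. -> sales, leaving &sales_lib; rescan: dwh */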
Interactive Voice Response (IVR) systems are likely one of the best and worst gifts to the world of communication, depending on who you ask. Businesses love IVR systems because they take out hundreds of millions of dollars of call center costs in automation of routine tasks, while consumers hate IVRs because they want to talk to an agent! It is a delicate balancing act to manage an IVR system that saves money for the business, yet is smart enough to minimize consumer abrasion by knowing who they are, why they are calling, and providing an easy automated solution or a quick route to an agent. There are many aspects to designing such IVR systems, including engineering, application development, omni-channel integration, user interface design, and data analytics. For larger call volume businesses, IVRs generate terabytes of data per year, with hundreds of millions of rows per day that track all system and customer- facing events. The data is stored in various formats and is often unstructured (lengthy character fields that store API return information or text fields containing consumer utterances). The focus of this talk is the development of a data mining framework based on SAS® that is used to parse and analyze IVR data in order to provide insights into usability of the application across various customer segments. Certain use cases are also provided.
Dmitriy Khots, West Corp
Many SAS® Global Forum papers have been written about getting your data into SAS® from Microsoft Excel and getting it back out to Excel after using SAS to manipulate it. Sometimes, however, you have to update Excel files with special formatting and formulas that would be too hard to replicate with Dynamic Data Exchange (DDE) or that change too often to make DDE worthwhile. We can still use the output prowess of SAS and a sprinkling of Visual Basic for Applications (VBA) to maintain the existing formatting and formulas in your Excel file. This paper focuses on the possibilities for updating Excel files by reading in the data, using SAS to modify it as needed, and then using DDE and a simple Excel VBA macro to write the results back to Excel with a formula while preserving the formatting already present in your source Excel file.
Brian Wrobel, Pearson
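The general DDE pattern behind this kind of update looks roughly like the sketch below; the worksheet name, cell range, data set, and VBA macro name are all assumptions for illustration, and Excel must already be open in the same Windows session:

options noxwait noxsync;

/* write three columns into rows 2-100 of Sheet1 without touching formats elsewhere */
filename xlout dde 'excel|Sheet1!r2c1:r100c3';

data _null_;
   file xlout;
   set work.updated_figures;          /* assumed data set with three columns */
   put region sales target;
run;

/* ask Excel to run an existing VBA macro (hypothetical name) that reapplies formulas */
filename xlcmd dde 'excel|system';

data _null_;
   file xlcmd;
   put '[RUN("RefreshFormulas")]';
run;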
In 2012, the Obama campaign used advanced analytics to target voters, especially in social media channels. Millions of voters were scored on models each night to predict their voting patterns. These models were used as the driver for all campaign decisions, including TV ads, budgeting, canvassing, and digital strategies. This presentation covers how the Obama campaign strategies worked, what's in store for analytics in future elections, and how these strategies can be applied in the business world.
Peter Tanner, Capital One
Becoming one of the best memorizers in the world doesn't happen overnight. With hard work, dedication, a bit of obsession, and the assistance of some clever analytics, Nelson Dellis climbed to the top of the memory rankings in under a year and is now a three-time USA Memory Champion. In this talk, he explains what it takes to become the best at memory, what grueling memory competitions involve, and how analytics helped him get there.
Nelson Dellis, Climb for Memory
Categorization hierarchies are ubiquitous in big data. Examples include MEDLINE's Medical Subject Headings (MeSH) taxonomy, United Nations Standard Products and Services Code (UNSPSC) product codes, and the Medical Dictionary for Regulatory Activities (MedDRA) hierarchy for adverse reaction coding. A key issue is that in most taxonomies the probability of any particular example belonging to a category is very small at lower levels of the hierarchy. Blindly applying a standard categorization model is likely to perform poorly if this fact is not taken into consideration. This paper introduces a novel technique for text categorization, Boolean rule extraction, which enables you to address this situation effectively. In addition, models generated by a rule-based technique are easy to interpret and can be modified by a human expert, enabling better human-machine interaction. The paper demonstrates how to use SAS® Text Miner macros and procedures to obtain effective predictive models at all hierarchy levels in a taxonomy.
Zheng Zhao, SAS
Russ Albright, SAS
James Cox, SAS
Ning Jin, SAS
Real-time web content personalization has come into its teen years, and a recent spate of marketing solutions has enabled marketers to finely personalize web content for visitors based on browsing behavior, geo-location, preferences, and so on. In an age where the attention span of a web visitor is measured in seconds, marketers hope that tailoring the digital experience will pique each visitor's interest just long enough to increase corporate sales. The range of solutions spans the entire spectrum from completely cloud-based installations to completely on-premises installations. Marketers struggle to find the solution that meets their corporation's marketing objectives, provides the highest agility and fastest time-to-market, and still keeps the marketing budget low. In the last decade or so, marketing strategies that personalized using purely on-premises customer data were quickly replaced by ones that personalized using only web-browsing behavior (also known as clickstream data). This shift was made possible by a spate of cloud-based solutions that enabled marketers to decouple themselves from the underlying IT infrastructure and the storage issues of capturing large volumes of data. However, this new trend meant that corporations weren't using much of their treasure trove of on-premises customer data. Of late, enterprises have been trying hard to find solutions that give them the best of both worlds--the ease of gathering clickstream data using cloud-based applications and the richness of on-premises customer data--to perform analytics that lead to better web content personalization for a visitor. This paper explains a process that attempts to address this rapidly evolving need. The paper assumes that the enterprise already has tools for capturing clickstream data, developing analytical models, and presenting the content. It provides a roadmap for a phased approach in which enterprises continue to capture clickstream data but bring that data in-house to be merged with customer data, enabling their analytics team to build sophisticated predictive models that can be deployed into the real-time web-personalization application. The final phase requires enterprises to keep improving their predictive models on a periodic basis.
Mahesh Subramanian, SAS Institute Inc.
Suneel Grover, SAS
Hawkins (1980) defines an outlier as an observation that "deviates so much from other observations as to arouse the suspicion that it was generated by a different mechanism." To identify data outliers, a classic multivariate outlier detection approach implements the Robust Mahalanobis Distance Method, splitting the distribution of distance values into two subsets (within-the-norm and out-of-the-norm): the threshold is usually set to the 97.5% quantile of the chi-square distribution with p (the number of variables) degrees of freedom, and items whose distance values lie beyond it are labeled out-of-the-norm. This threshold value is an arbitrary number, however, and it might flag as out-of-the-norm a number of items that are actually extreme values of the baseline distribution rather than outliers. Therefore, it is desirable to identify an additional threshold, a cutoff point that divides the set of out-of-the-norm points into two subsets--extreme values and outliers. One way to do this, in particular for larger databases, is to increase the threshold value to another arbitrary number, but this approach requires taking the size of the data set into consideration, since size affects the threshold separating outliers from extreme values. A 2003 article by Gervini (Journal of Multivariate Statistics) proposes an adaptive threshold that increases with the number of items n if the data are clean but remains bounded if there are outliers in the data. In 2005, Filzmoser, Garrett, and Reimann (Computers & Geosciences) built on Gervini's contribution to derive by simulation a relationship between the number of items n, the number of variables p, and a critical ancillary variable for the determination of outlier thresholds. This paper implements the Gervini adaptive threshold estimator using PROC ROBUSTREG and the SAS® chi-square functions CINV and PROBCHI, available in the SAS/STAT® environment. It also provides data simulations to illustrate the reliability and the flexibility of the method in distinguishing true outliers from extreme values.
Paulo Macedo, Integrity Management Services, LLC
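As a simple illustration of the fixed chi-square cutoff that the adaptive method refines, assume a data set ROBDIST that already contains a robust distance RD for each observation and p = 5 variables (both assumptions made only for this sketch):

%let p = 5;                                  /* number of variables (assumed) */

data flagged;
   set robdist;                              /* assumed to contain rd = robust Mahalanobis distance */
   cutoff      = sqrt(cinv(0.975, &p));      /* classical 97.5% chi-square threshold */
   tail_prob   = 1 - probchi(rd**2, &p);     /* upper-tail probability of each squared distance */
   out_of_norm = (rd > cutoff);
run;

The adaptive estimator then compares the observed share of distances beyond the cutoff with the share expected under the chi-square distribution, so the final outlier threshold grows with n when the data are clean but stays bounded when genuine outliers are present.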
Companies are increasingly relying on analytics as the right solution to their problems. In order to use analytics and create value for the business, companies first need to store, transform, and structure the data to make it available and functional. This paper shows a successful business case where the extraction and transformation of the data combined with analytical solutions were developed to automate and optimize the management of the collections cycle for a TELCO company (DIRECTV Colombia). SAS® Data Integration Studio is used to extract, process, and store information from a diverse set of sources. SAS Information Map is used to integrate and structure the created databases. SAS® Enterprise Guide® and SAS® Enterprise Miner™ are used to analyze the data, find patterns, create profiles of clients, and develop churn predictive models. SAS® Customer Intelligence Studio is the platform on which the collection campaigns are created, tested, and executed. SAS® Web Report Studio is used to create a set of operational and management reports.
Darwin Amezquita, DIRECTV
Paulo Fuentes, Directv Colombia
Andres Felipe Gonzalez, Directv
Crowd sourcing of data is growing rapidly, enabled by smart devices equipped with assisted GPS location, tagging of photos, and mapping of other aspects of users' lives and activities. When such data are used, a fundamental assumption is made that the reported locations are accurate within the usual GPS limitation of approximately 10 m. However, as a result of a wide range of technical issues, the accuracy of the reported locations turns out to be highly variable and cannot be relied on; some locations are accurate, but many are highly inaccurate, and that can affect many of the decisions being made based on the data. An analysis of a set of data is presented that demonstrates that this assumption is flawed, along with examples of levels of inaccuracy that have significant consequences in a range of contexts. Using Base SAS®, the paper demonstrates the quality and veracity of the data and the scale of the errors that can be present. This analysis has critical significance in fields such as mobile location-based marketing, forensics, and law.
Richard Self, University of Derby
Throughout the latter part of the twentieth century, the United States of America has experienced an incredible boom in the rate of incarceration of its citizens. This increase arguably began in the 1970s, when the Nixon administration oversaw the beginning of the war on drugs in America. The U.S. now has one of the highest rates of incarceration among industrialized nations. However, the citizens who have been incarcerated on drug charges have been disproportionately African American or members of other racial minorities, even though many studies have concluded that drug use is fairly equal among racial groups. In order to remedy this situation, it is essential to first understand why so many more people have been arrested and incarcerated. In this research, I explore a potential explanation for the epidemic of mass incarceration. I intend to answer the question: does gubernatorial rhetoric have an effect on the rate of incarceration in a state? More specifically, I examine the language that the governor of a state uses at the annual State of the State address in order to see whether there is any correlation between rhetoric and the subsequent rate of incarceration in that state. To understand any possible correlation, I use SAS® Text Miner and SAS® Contextual Analysis to examine the attitude toward crime in each speech. The political phenomenon that I am trying to understand is how state government employees are affected by the tone that the chief executive of a state takes toward crime, and whether the actions of these state employees subsequently lead to higher rates of incarceration. The governor is the top government official in charge of the employees of a state, so when this official addresses the state, the employees may take the governor's message as an order for how to do their jobs. While many political factors can affect legislation and its enforcement, a governor has the ability to set the tone of a state when it comes to policy issues such as crime.
Catherine Lachapelle, UNC Chapel Hill
SAS® platform administrators always feel the pinch of not knowing how much storage space each user occupies on a specific file system or across the entire environment. Sometimes the platform administrator does not have access to all users' folders, so they have to plan for the worst. There are multiple approaches to tackling this problem. One of the better methods is an alert mechanism that notifies users when they are among the top 10 consumers of space on a file system.
Venkateswarlu Toluchuri, United Health Group - OPTUM
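One possible shape of such a check on UNIX or Linux (the directory path below is a placeholder) is to pipe du output into a DATA step and rank the results:

filename du pipe "du -sk /sasdata/users/* 2>/dev/null";

data usage;
   infile du dlm='0920'x truncover;    /* du separates size and path with a tab */
   input kb path : $200.;
   user = scan(path, -1, '/');
run;

proc sort data=usage;
   by descending kb;
run;

data _null_;
   set usage(obs=10);
   put "NOTE: top-10 space consumer " user= kb=;
   /* a FILENAME EMAIL step could notify each of these users here */
run;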
In this era of big data, the use of text analytics to discover insights is rapidly gaining popularity in businesses. On average, more than 80 percent of the data in enterprises may be unstructured. Text analytics can help discover key insights and extract useful topics and terms from the unstructured data. The objective of this paper is to build a model using textual data that predicts the factors that contribute to downtime of a truck. This research analyzes the data of over 200,000 repair tickets of a leading truck manufacturing company. After the terms were grouped into fifteen key topics using the text topic node of SAS® Text Miner, a regression model was built using these topics to predict truck downtime, the target variable. Data was split into training and validation for developing the predictive models. Knowledge of the factors contributing to downtime and their associations helped the organization to streamline their repair process and improve customer satisfaction.
Ayush Priyadarshi, Oklahoma State University
Goutam Chakraborty, Oklahoma State University
The concept of least squares means, or population marginal means, seems to confuse a lot of people. We explore least squares means as implemented by the LSMEANS statement in SAS®, beginning with the basics. Particular attention is paid to the effect of alternative parameterizations (for example, whether binary variables are in the CLASS statement) and the effect of the OBSMARGINS option. We use examples to show how to mimic LSMEANS using ESTIMATE statements and the advantages of the relatively new LSMESTIMATE statement. The basics of estimability are discussed, including how to get around the dreaded non-estimable messages. Emphasis is placed on using the STORE statement and PROC PLM to test hypotheses without having to redo all the model calculations. This material is appropriate for all levels of SAS experience, but some familiarity with linear models is assumed.
David Pasta, ICON Clinical Research
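A minimal sketch of the workflow just described, with a hypothetical data set, class variables, and contrast: fit once, store the model, and test further hypotheses from the item store with PROC PLM instead of refitting:

proc glm data=study;
   class treatment center;
   model response = treatment center baseline;
   lsmeans treatment / pdiff om;        /* OM weights the margins by the observed data */
   store work.study_fit;                /* save the fitted model as an item store */
run; quit;

proc plm restore=work.study_fit;
   /* compare treatment A with the average of B and C without redoing the model */
   lsmestimate treatment 'A vs avg(B,C)' 2 -1 -1 / divisor=2;
run;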
Mathematical optimization is a powerful paradigm for modeling and solving business problems that involve interrelated decisions about resource allocation, pricing, routing, scheduling, and similar issues. The OPTMODEL procedure in SAS/OR® software provides unified access to a wide range of optimization solvers and supports both standard and customized optimization algorithms. This paper illustrates PROC OPTMODEL's power and versatility in building and solving optimization models and describes the significant improvements that result from PROC OPTMODEL's many new features. Highlights include the recently added support for the network solver, the constraint programming solver, and the COFOR statement, which allows parallel execution of independent solver calls. Best practices for solving complex problems that require access to more than one solver are also demonstrated.
Rob Pratt, SAS
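For readers new to the procedure, a toy linear program (made-up products and numbers, not from the paper) shows the basic declarative style of PROC OPTMODEL:

proc optmodel;
   set PRODUCTS = {'chairs','tables'};
   num profit {PRODUCTS} = [45 80];      /* profit per unit (assumed) */
   num hours  {PRODUCTS} = [2 5];        /* labor hours per unit (assumed) */
   var Make {PRODUCTS} >= 0;

   max TotalProfit = sum {p in PRODUCTS} profit[p] * Make[p];
   con Labor: sum {p in PRODUCTS} hours[p] * Make[p] <= 400;

   solve with lp;
   print Make;
quit;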
Polytomous items have been widely used in educational and psychological settings. As a result, the demand for statistical programs that estimate the parameters of polytomous items has been increasing. For this purpose, Samejima (1969) proposed the graded response model (GRM), in which category characteristic curves are characterized by the difference of the two adjacent boundary characteristic curves. In this paper, we show how the SAS-PIRT macro (a SAS® macro written in SAS/IML®) was developed based on the GRM and how it performs in recovering the parameters of polytomous items using simulated data.
Sung-Hyuck Lee, ACT, Inc.
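The core GRM computation is easy to express in SAS/IML; the item parameters below are invented for illustration and are not taken from the SAS-PIRT macro itself:

proc iml;
   a     = 1.3;                          /* discrimination (assumed) */
   b     = {-1.0 0.2 1.1};               /* ordered boundary (difficulty) parameters (assumed) */
   theta = -0.5;                         /* examinee ability */

   /* boundary characteristic curves: P*(k) = 1 / (1 + exp(-a*(theta - b_k))) */
   pstar = 1 / (1 + exp(-a * (theta - b)));

   /* category probabilities are differences of adjacent boundary curves */
   bound = 1 || pstar || 0;
   pcat  = bound[, 1:(ncol(bound)-1)] - bound[, 2:ncol(bound)];
   print pcat[label='Category probabilities at theta'];
quit;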
Many papers have been written over the years that describe how to use Dynamic Data Exchange (DDE) to pass data from SAS® to Excel. This presentation aims to show you how to do the same exchange with the SAS Output Delivery System (ODS) and the TEMPLATE Procedure.
Peter Timusk, Statistics Canada
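A minimal sketch of the ODS route described above (the file path, style name, and sheet options are illustrative): a small PROC TEMPLATE style controls appearance, and the SAS 9.4 ODS EXCEL destination writes a native .xlsx file with no DDE involved:

proc template;
   define style styles.green_headers;
      parent = styles.htmlblue;
      style Header from Header / background = cxDDEEDD;   /* shaded column headers */
   end;
run;

ods excel file="/reports/monthly.xlsx"
          style=styles.green_headers
          options(sheet_name='Summary' autofilter='all');

proc print data=sashelp.class noobs;
run;

ods excel close;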
How do you serve 25 million video ads a day to Internet users in 25 countries, while ensuring that you target the right ads to the right people on the right websites at the right time? With a lot of help from math, that's how! Come hear how Videology, an Internet advertising company, combines mathematical programming, predictive modeling, and big data techniques to meet the expectations of advertisers and online publishers, while respecting the privacy of online users and combatting fraudulent Internet traffic.
Kaushik Sinha, Videology
Network diagrams in SAS® Visual Analytics help highlight relationships in complex data by enabling users to visually correlate entire populations of values based on how they relate to one another. Network diagrams are appealing because they enable an analyst to visualize large volumes and relationships of data and to assign multiple roles to represent key factors for analysis such as node size and color and linkage size and color. SAS Visual Analytics can overlay a network diagram on top of a spatial geographic map for an even more appealing visualization. This paper focuses specifically on how to prepare data for network diagrams and how to build network diagrams in SAS Visual Analytics. This paper provides two real-world examples illustrating how to visualize users and groups from SAS® metadata and how banks can visualize transaction flow using network diagrams.
Stephen Overton, Zencos Consulting
Benjamin Zenick, Zencos
Whether you have a few variables to compare or billions of rows of data to explore, seeing the data in visual format can make all the difference in the insights you glean. In this session, learn how to determine which data is best delivered through visualization, understand the myriad types of data visualizations for use with your big data, and create effective data visualizations. If you are new to data visualization, this talk will help you understand how to best communicate with your data.
Tricia Aanderud, Zencos
When you are analyzing your data and building your models, you often find that the data cannot be used in the intended way. Systematic patterns, incomplete data, and inconsistencies from a business point of view are often the reason. You wish you could get a complete picture of the quality status of your data much earlier in the analytic lifecycle. SAS® analytics tools such as SAS® Visual Analytics help you profile and visualize the quality status of your data in an easy and powerful way. In this session, you learn advanced methods for analytic data quality profiling. You will see case studies based on real-life data, where we look at time series data from a bird's-eye view and interactively profile GPS trackpoint data from a sailing race.
Gerhard Svolba, SAS
If you have not had a chance to explore SAS® Studio yet, or if you're eager to see what's new, this paper gives you an introduction to this new browser-based interface for SAS® programmers and a peek at what's coming. With SAS Studio, you can access your data files, libraries, and existing programs, and you can write new programs while using SAS software behind the scenes. SAS Studio connects to a SAS server in order to process SAS programs. The SAS server can be a hosted server in a cloud environment, a server in your local environment, or a copy of SAS on your local machine. It's a comfortable environment for those used to the traditional SAS windowing environment (SAS® Display Manager), but new features such as a query window, process flow diagrams, and tasks have been added to appeal to traditional SAS® Enterprise Guide® users.
Mike Porter, SAS
Michael Monaco, SAS
Amy Peters, SAS
The latest releases of SAS® Data Integration Studio and DataFlux® Data Management Platform provide an integrated environment for managing and transforming your data to meet new and increasingly complex data management challenges. The enhancements help develop efficient processes that can clean, standardize, transform, master, and manage your data. The latest features include capabilities for building complex job processes, new web-based development and job monitoring environments, enhanced ELT transformation capabilities, big data transformation capabilities for Hadoop, integration with the analytic platform provided by SAS® LASR™ Analytic Server, enhanced features for lineage tracing and impact analysis, and new features for master data and metadata management. This paper provides an overview of the latest features of the products and includes use cases and examples for leveraging product capabilities.
Nancy Rausch, SAS
Mike Frost, SAS
Now that SAS® users are moving to 64-bit Microsoft Windows platforms, some are discovering that vendor-supplied DLLs might still be 32-bit. Because 64-bit applications cannot use 32-bit DLLs, this presents serious technical issues. This paper explains how the MODULE routines in SAS can be used to call into 32-bit DLLs successfully, using new features added in SAS® 9.3.
Rick Langston, SAS
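The general MODULE calling pattern that the paper builds on looks like the sketch below; GetTickCount from kernel32 is used only as a familiar illustration, not as an example from the paper:

/* describe the DLL routine in an attribute table */
filename sascbtbl temp;

data _null_;
   file sascbtbl;
   put 'routine GetTickCount minarg=0 maxarg=0 stackpop=called returns=long module=kernel32;';
run;

/* call the routine through MODULEN */
data _null_;
   msecs = modulen('GetTickCount');
   put 'Milliseconds since system start: ' msecs comma15.;
run;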
In many situations, an outcome of interest has a large number of zero outcomes and a group of nonzero outcomes that are discrete or highly skewed. For example, in modeling health care costs, some patients have zero costs, and the distribution of positive costs is often extremely right-skewed. When modeling charitable donations, many potential donors give nothing, and the majority of donations are relatively small, with a few very large donors. In the analysis of count data, there are also times when there are more zeros than would be expected using standard methodology, or cases where the zeros might differ substantially from the non-zeros, such as the number of cavities a patient has at a dental appointment or the number of children born to a mother. If the data have such a structure and ordinary least squares methods are used, then predictions and estimates might be inaccurate. The two-part model gives us a flexible and useful modeling framework in many situations. Methods for fitting the models with SAS® software are illustrated.
Laura Kapitula, Grand Valley State University
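A minimal sketch of the two-part approach with hypothetical cost data (variable names assumed): a logistic model for whether any cost is incurred, and a gamma regression with a log link for the size of the positive costs:

data costs2;
   set costs;                        /* assumed to contain cost, age, chronic */
   any_cost = (cost > 0);
run;

/* part 1: probability of incurring any cost */
proc logistic data=costs2 descending;
   model any_cost = age chronic;
run;

/* part 2: gamma regression with log link on the positive costs only */
proc genmod data=costs2;
   where cost > 0;
   model cost = age chronic / dist=gamma link=log;
run;

The expected cost for a new observation is then the product of the predicted probability from part 1 and the predicted mean from part 2.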
A familiar adage in firefighting--if you can predict it, you can prevent it--rings true in many circles of accident prevention, including software development. If you can predict that a fire, however unlikely, someday might rage through a structure, it's prudent to install smoke detectors to facilitate its rapid discovery. Moreover, the combination of smoke detectors, fire alarms, sprinklers, fire-retardant building materials, and rapid intervention might not prevent a fire from starting, but it can prevent the fire from spreading and facilitate its immediate and sometimes automatic extinguishment. Thus, as fire codes have grown to incorporate increasingly more restrictions and regulations, and as fire suppression gear, tools, and tactics have continued to advance, even the harrowing business of firefighting has become more reliable, efficient, and predictable. As operational SAS® data processes mature over time, they too should evolve to detect, respond to, and overcome dynamic environmental challenges. Erroneous data, invalid user input, disparate operating systems, network failures, memory errors, and other challenges can surprise users and cripple critical infrastructure. Exception handling describes both the identification of and response to adverse, unexpected, or untimely events that can cause process or program failure, as well as anticipated events or environmental attributes that must be handled dynamically through prescribed, predetermined channels. Rapid suppression and automatic return to functioning is the hopeful end state but, when catastrophic events do occur, exception handling routines can terminate a process or program gracefully while providing meaningful execution and environmental metrics to developers both for remediation and future model refinement. This presentation introduces fault-tolerant Base SAS® exception handling routines that facilitate robust, reliable, and responsible software design.
Troy Hughes, Datmesis Analytics
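A minimal flavor of such a routine (the macro, data set, and messages below are hypothetical): validate inputs before processing, inspect the automatic &SYSERR code afterward, and exit gracefully instead of letting errors cascade:

%macro load_claims(ds=);
   %if not %sysfunc(exist(&ds)) %then %do;
      %put ERROR: input data set &ds was not found - process halted gracefully.;
      %return;
   %end;

   proc sort data=&ds out=work.claims_sorted;
      by claim_id;
   run;

   %if &syserr > 4 %then %do;
      %put ERROR: PROC SORT failed with SYSERR=&syserr - downstream steps skipped.;
      %return;
   %end;

   %put NOTE: &ds validated and sorted successfully.;
%mend load_claims;

%load_claims(ds=work.claims_raw)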
We demonstrate a method of using SAS® 9.4 to supplement the interpretation of the dimensions of a Multidimensional Scaling (MDS) model, a process that could be difficult without SAS®. In our paper, we examine the question 'Why do people choose to drive to work (over other means of travel)?', which transportation researchers need to answer in order to encourage drivers to switch to more environmentally friendly travel modes. We applied the MDS approach to a travel survey data set because MDS has the advantage of extracting drivers' motivations in multiple dimensions. To overcome the challenges of dimension interpretation with MDS, we used the logistic regression function of SAS 9.4 to identify the variables that are strongly associated with each dimension, thus greatly aiding our interpretation procedure. Our findings are important to transportation researchers, practitioners, and MDS users.
Jun Neoh, University of Southampton
The experiences of the programmer role in a large SAS® shop are shared. Shortages in SAS programming talent tend to result in one SAS programmer doing all of the production programming within a unit in a shop. In a real-world example, management realized the problem and brought in new programmers to help do the work. The new programmers actually improved the existing programmers' programs. It became easier for the experienced programmers to complete other programming assignments within the unit. And, the different programs in the shop had a standard structure. As a result, all of the programmers had a clearer picture of the work involved and knowledge hoarding was eliminated. Experienced programmers were now available when great SAS code needed to be written. Yet, they were not the only programmers who could do the work! With multiple programmers able to do the same tasks, vacations were possible and didn't threaten deadlines. It was even possible for these programmers to be assigned other tasks outside of the unit and broaden their own skills in statistical production work.
Peter Timusk, Statistics Canada
Smoke detectors operate by comparing actual air quality to expected air quality standards and immediately alerting occupants when smoke or particle levels exceed established thresholds. Just as rapid identification of smoke (that is, poor air quality) can detect harmful fire and facilitate its early extinguishment, rapid detection of poor quality data can highlight data entry or ingestion errors, faulty logic, insufficient or inaccurate business rules, or process failure. Aspects of data quality--such as availability, completeness, correctness, and timeliness--should be assessed against stated requirements that account for the scope, objective, and intended use of data products. A single outlier, an accidentally locked data set, or even subtle modifications to a data structure can cause a robust extract-transform-load (ETL) infrastructure to grind to a halt or produce invalid results. Thus, a mature data infrastructure should incorporate quality assurance methods that facilitate robust processing and quality data products, as well as quality control methods that monitor and validate data products against their stated requirements. The SAS® Smoke Detector represents a scalable, generalizable solution that assesses the availability, completeness, and structure of persistent SAS data sets, ideal for finished data products or transactional data sets received with standardized frequency and format. Like a smoke detector, the quality control dashboard is not intended to discover the source of the blaze, but rather to sound an alarm to stakeholders that data have been modified, locked, deleted, or otherwise corrupted. Through rapid detection and response, the fidelity of data is increased as well as the responsiveness of developers to threats to data quality and validity.
Troy Hughes, Datmesis Analytics
Working with multiple data sources in SAS® was not straightforward until PROC FEDSQL was introduced in the SAS® 9.4 release. Federated Query Language, or FEDSQL, is a vendor-independent language that provides a common SQL syntax for communicating across multiple relational databases without having to worry about vendor-specific SQL syntax. PROC FEDSQL is the SAS implementation of the FEDSQL language. PROC FEDSQL enables us to write federated queries that join tables from different databases in a single query, without having to load the tables into SAS individually and combine them using DATA steps and PROC SQL statements. The objective of this paper is to demonstrate how PROC FEDSQL fetches data from multiple data sources such as a Microsoft SQL Server database, a MySQL database, and a SAS data set, and runs federated queries across all of them. Other powerful features of PROC FEDSQL, such as transactions and the FEDSQL pass-through facility, are discussed briefly.
Zabiulla Mohammed, Oklahoma State University
Ganesh Kumar Gangarajula, Oklahoma State University
Pradeep Reddy Kalakota, Federal Home Loan Bank of Des Moines
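A minimal sketch of such a federated query (the librefs, tables, and connection options below are site-specific placeholders, not the authors' code):

/* librefs for each source; connection options depend on your environment */
libname mssql sqlsvr datasrc=sales_dsn user=&user password=&pw;
libname mydb  mysql  server=dbsrv database=crm user=&user password=&pw;
libname local "/sasdata/marketing";

proc fedsql;
   create table local.combined as
   select o.order_id, o.amount, c.segment, p.promo_flag
   from   mssql.orders o
          inner join mydb.customers   c on o.cust_id  = c.cust_id
          left  join local.promotions p on o.order_id = p.order_id;
quit;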
Many retail and consumer packaged goods (CPG) companies are now keeping track of what their customers purchased in the past, often through some form of loyalty program. This record keeping is one example of how modern corporations are building data sets that have a panel structure, a data structure that is also pervasive in insurance and finance organizations. Panel data (sometimes called longitudinal data) can be thought of as the joining of cross-sectional and time series data. Panel data enable analysts to control for factors that cannot be considered by simple cross-sectional regression models that ignore the time dimension. These factors, which are unobserved by the modeler, might bias regression coefficients if they are ignored. This paper compares several methods of working with panel data in the PANEL procedure and discusses how you might benefit from using multiple observations for each customer. Sample code is available.
Bobby Gutierrez, SAS
Kenneth Sanford, SAS
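A minimal sketch with a hypothetical loyalty-card panel (data set and variables assumed): the ID statement declares the cross-section and time dimensions, and the MODEL options request both a pooled fit and a one-way fixed-effects fit for comparison:

proc sort data=purchases out=panel_in;
   by customer_id month;
run;

proc panel data=panel_in;
   id customer_id month;                   /* cross-section and time identifiers */
   model spend = price promo_flag loyalty_years / pooled fixone;
run;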
Managing and organizing external files and directories play an important role in our data analysis and business analytics work. A good file management system can streamline project management and file organization and significantly improve work efficiency. Therefore, under many circumstances, it is necessary to automate and standardize the file management processes through SAS® programming. Compared with managing SAS files via PROC DATASETS, managing external files is a much more challenging task that requires advanced programming skills. This paper presents and discusses various methods and approaches to managing external files with SAS programming. The illustrated methods and skills can have important applications in a wide variety of analytic fields.
Justin Jia, Trans Union
Amanda Lin, CIBC
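One of the basic building blocks for this kind of automation is the external-file function family; the sketch below (directory path assumed) inventories a directory with the FILENAME, DOPEN, DNUM, and DREAD functions:

%let dir = /sasdata/archive;

data file_inventory;
   length fname $256;
   rc  = filename('d', "&dir");
   did = dopen('d');
   if did = 0 then put "ERROR: could not open directory &dir";
   else do;
      do i = 1 to dnum(did);
         fname = dread(did, i);      /* name of the i-th member of the directory */
         output;
      end;
      rc = dclose(did);
   end;
   keep fname;
run;

Related functions such as FOPEN, FINFO, FDELETE, and RENAME can then be combined with this inventory to inspect attributes, delete files, or reorganize directories.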
Everyone likes getting a raise, and using ARRAYs in SAS® can help you do just that! Using ARRAYs simplifies processing, allowing for reading and analyzing of repetitive data with minimum coding. Improving the efficiency of your coding and in turn, your SAS productivity, is easier than you think! ARRAYs simplify coding by identifying a group of related variables that can then be referred to later in a DATA step. In this quick tip, you learn how to define an ARRAY using an array statement that establishes the ARRAY name, length, and elements. You also learn the criteria and limitations for the ARRAY name, the requirements for array elements to be processed as a group, and how to call an ARRAY and specific array elements within a DATA step. This quick tip reveals useful functions and operators, such as the DIM function and using the OF operator within existing SAS functions, that make using ARRAYs an efficient and productive way to process data. This paper takes you through an example of how to do the same task with and without using ARRAYs in order to illustrate the ease and benefits of using them. Your coding will be more efficient and your data analysis will be more productive, meaning you will deserve ARRAYs!
Kate Burnett-Isaacs, Statistics Canada
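A flavor of the with-and-without comparison described above, using hypothetical monthly sales variables:

data adjusted;
   set sales;                          /* assumed to contain sales1-sales12 */

   /* without an array, the same 2% adjustment takes twelve assignment statements */

   /* with an array, one loop handles every month */
   array m {12} sales1-sales12;
   do i = 1 to dim(m);
      m{i} = m{i} * 1.02;
   end;

   /* the OF operator works on the whole array, too */
   annual_total = sum(of m{*});
   drop i;
run;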
We all know there are multiple ways to use SAS® language components to generate the same values in data sets and output (for example, using the DATA step versus PROC SQL, If-Then-Elses versus Format table conversions, PROC MEANS versus PROC SQL summarizations, and so on). However, do you compare those different ways to determine which are the most efficient in terms of computer resources used? Do you ever consider the time a programmer takes to develop or maintain code? In addition to determining efficient syntax, do you validate your resulting data sets? Do you ever check data values that must remain the same after being processed by multiple steps and verify that they really don't change? We share some simple coding techniques that have proven to save computer and human resources. We also explain some data validation and comparison techniques that ensure data integrity. In our distributed computing environment, we show a quick way to transfer data from a SAS server to a local client by using PROC DOWNLOAD and then PROC EXPORT on the client to convert the SAS data set to a Microsoft Excel file.
Jason Beene, Wells Fargo
Mary Katz, Wells Fargo Bank
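The server-to-client transfer mentioned at the end of the abstract follows this general shape (the SAS/CONNECT session name and output path are placeholders):

signon sasapp;                           /* assumed SAS/CONNECT server session */

rsubmit;
   proc download data=work.monthly_summary out=work.monthly_summary;
   run;
endrsubmit;

/* convert the local copy to Excel on the client */
proc export data=work.monthly_summary
            outfile="C:\reports\monthly_summary.xlsx"
            dbms=xlsx
            replace;
run;

signoff sasapp;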
How often have you pulled oodles of data out of the corporate data warehouse down into SAS® for additional processing? This additional processing, sometimes thought to be uniquely SAS, might include FIRST. logic, cumulative totals, lag functionality, specialized summarization, or advanced date manipulation. Using the analytical (or OLAP) and windowing functionality available in many databases (for example, in Teradata and IBM Netezza), all of this processing can be performed directly in the database without moving and reprocessing detail data unnecessarily. This presentation illustrates how to increase your coding and execution efficiency by using the database's power through your SAS environment.
Harry Droogendyk, Stratia Consulting Inc.
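As an illustration of the idea (connection options and table names are placeholders), a cumulative total can be computed inside Teradata with explicit SQL pass-through instead of being built with FIRST./LAG logic after downloading the detail rows:

proc sql;
   connect to teradata (server=tdprod user=&user password=&pw);

   create table cum_sales as
   select * from connection to teradata (
      select customer_id,
             txn_date,
             amount,
             sum(amount) over (partition by customer_id
                               order by txn_date
                               rows unbounded preceding) as cum_amount
      from   dw.sales_detail
   );

   disconnect from teradata;
quit;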
As SAS® products become more web-oriented and sophisticated, SAS administrators face an increased challenge in managing their SAS middle-tier environments. They want answers to important critical questions when planning, installing, configuring, deploying, and administering their SAS products. They also need to meet requirements for high performance, high availability, increased security, maintainability, and more. In this paper, we identify the most common and challenging questions that our administrators and customers have asked. These questions range across topics such as SAS middle-tier architecture, clustering, performance, security, and administration using SAS® Environment Manager, and they come from many sources such as technical support, consultants, and internal customer experience testing teams. The specific questions include: What is new in the SAS 9.4 middle-tier infrastructure, and why is that better for me? Should I use the SAS Web Server, or can I use another third-party web server in my deployment? Where can I deploy custom dynamic web applications and static content? What are the SAS JRE, SAS Web Server, and SAS Web Application Server upgrade policies and processes? How do I architect and configure for high availability for EBI and VA? How do I install, update, or add products for cluster members? How can I tune middle-tier performance and improve the start-up time of my SAS Web Application Server? What options are available for configuring SSL? What is the security policy, what security patches are available, and how do I apply them? How can I manage my middle-tier infrastructure and applications, and how are users and accounts managed in SAS Environment Manager? The paper presents detailed answers to these questions and points out where you can find more information. We believe that with these answers, SAS administrators can implement and manage their SAS environment with greater confidence and satisfaction.
Zhiyong Li, SAS
Mike Thorland, SAS