SAS® Visual Analytics provides self-service capabilities for users to analyze, explore, and report on their own data. As users explore their data, there is always a need to bring in more data sources, create new variables, combine data from multiple sources, and occasionally update the data itself. SAS Visual Analytics provides targeted user capabilities to access, modify, and enhance data to suit specific business needs. This paper provides a clear understanding of these capabilities and suggests best practices for self-service data management in SAS Visual Analytics.
Gregor Herrmann, SAS
In 2013, the University of North Carolina (UNC) at Chapel Hill initiated enterprise-wide use of SAS® solutions for reporting and data transformations. Just over one year later, the initial rollout was scheduled to go live to an audience of 5,500 users as part of an adoption of PeopleSoft ERP for Finance, Human Resources, Payroll, and Student systems. SAS® Visual Analytics was used for primary report delivery as an embedded resource within the UNC Infoporte, an existing portal. UNC made the date. With the SAS solutions, UNC delivered the data warehouse and initial reports on the same day that the ERP systems went live. After the success of the initial launch, UNC continues to develop and evolve the solution with additional technologies, data, and reports. This presentation touches on a few of the elements required for a medium to large size organization to integrate SAS solutions such as SAS Visual Analytics and SAS® Enterprise Business Intelligence within their infrastructure.
Jonathan Pletzke, UNC Chapel Hill
In the very near future you will likely encounter Hadoop. It is rapidly displacing database management systems in the corporate world and is rearing its head in the SAS® world. If you think now is the time to learn how to use SAS with Hadoop, you are in luck. This workshop is the jump start you need. It introduces Hadoop and shows you how to access it by using SAS/ACCESS® Interface to Hadoop. During the workshop, we show you how to configure your SAS environment so that you can access Hadoop data, how to use the Hadoop FILENAME statement, how to use the HADOOP procedure, and how to use SAS/ACCESS Interface to Hadoop (including performance tuning).
Diane Hatcher, SAS
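As a taste of the three techniques the workshop covers, here is a minimal sketch; the host name, configuration file, paths, and user ID are hypothetical, not a recommended production setup:

   /* FILENAME access method: read a file directly from HDFS */
   filename hdfsin hadoop '/user/demo/class.csv'
      cfg='/etc/hadoop/conf/sas-hadoop.xml' user='demo';
   data work.class;
      infile hdfsin dlm=',' dsd firstobs=2;
      input name :$20. age;
   run;

   /* PROC HADOOP: issue HDFS commands from a SAS session */
   proc hadoop cfg='/etc/hadoop/conf/sas-hadoop.xml' username='demo';
      hdfs mkdir='/user/demo/staging';
   run;

   /* SAS/ACCESS Interface to Hadoop: a LIBNAME to Hive tables */
   libname hdp hadoop server='hive.example.com' port=10000 user=demo;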
This presentation is an open-ended discussion about techniques for transferring data and analytical results from SAS® to Microsoft Excel. There are some introductory comments, but this presentation does not have any set content. Instead, the topics discussed are dictated by attendee questions. Come prepared to ask and get answers to your questions. To submit your questions or suggestions for discussion in advance, go to http://support.sas.com/surveys/askvince.html.
Vince DelGobbo, SAS
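For attendees who want a starting point before the session, a minimal sketch of one frequently discussed technique, the ExcelXP tagset, which writes SpreadsheetML that Excel opens; the output path and tagset options shown are illustrative only:

   ods tagsets.excelxp file='/tmp/class.xml' style=printer
       options(sheet_name='Class Roster' frozen_headers='yes');
   proc print data=sashelp.class noobs;
   run;
   ods tagsets.excelxp close;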
The value of administrative databases for understanding real-world practice patterns has become increasingly apparent, and it is essential in the current health-care environment. The Affordable Care Act has helped us to better understand the current use of technology and different approaches to surgery. This paper describes a method for extracting specific information about surgical procedures from the Healthcare Cost and Utilization Project (HCUP) database (also referred to as the National (Nationwide) Inpatient Sample (NIS)). The analyses provide a framework for comparing the different modalities of surgical procedures of interest. Using an NIS database for a single year, we want to identify cohorts based on surgical approach. We do this by identifying the ICD-9 codes specific to robotic surgery, laparoscopic surgery, and open surgery. After we identify the appropriate codes using an ARRAY statement, a similar array is created to flag procedures based on those ICD-9 codes. Any minimally invasive procedure (robotic or laparoscopic) that results in a conversion to open surgery is flagged as a conversion. Comorbidities are identified by ICD-9 codes representing the severity of each subject and merged with the NIS inpatient core file. Using a FORMAT statement for all diagnosis variables, we create macros that can be regenerated for each type of complication. These macros are compiled in SAS® and stored in a library alongside the four macros that generate the tables; the table macros are invoked with different macro variables, compute the frequencies for all cohorts, and build the table structure, including each table's title and number. This paper describes a systematic method in SAS/STAT® 9.2 to extract the data from the NIS using the ARRAY statement for the specific ICD-9 codes, to format the extracted data for the analysis, to merge the different NIS databases by procedure, and to use automated macros to generate the report.
Ravi Tejeshwar Reddy Gaddameedi, California State University, East Bay
Usha Kreaden, Intuitive Surgical
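A minimal sketch of the ARRAY-based flagging the paper describes; the data set name and the ICD-9 procedure codes shown are placeholders, not the study's actual code lists:

   data work.cohorts;
      set nis.core_2012;                    /* hypothetical NIS inpatient core file */
      array pr{15} $ pr1-pr15;              /* ICD-9 procedure code array */
      robotic = 0; laparoscopic = 0; open = 0;
      do i = 1 to dim(pr);
         if pr{i} in ('1741' '1742' '1744') then robotic = 1;      /* placeholder codes */
         else if pr{i} in ('5421')          then laparoscopic = 1; /* placeholder codes */
         else if pr{i} in ('5411')          then open = 1;         /* placeholder codes */
      end;
      /* a minimally invasive case that also carries an open code is a conversion */
      conversion = ((robotic or laparoscopic) and open);
      drop i;
   run;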
To bring order to the wild world of big data, EMC and its partners have joined forces to meet customer challenges and deliver a modern analytic architecture. This unified approach encompasses big data management, analytics discovery, and deployment via end-to-end solutions that solve your big data problems. They are also designed to free up more time for innovation, deliver faster deployments, and help you find new insights from secure and properly managed data. The EMC Business Data Lake is a fully engineered, enterprise-grade data lake built on a foundation of core data technologies. It provides pre-configured building blocks that enable self-service, end-to-end integration, management, and provisioning of the entire big data environment. Major benefits include the ability to make more timely and informed business decisions and realize the vision of analytics in weeks instead of months. SAS enhances the Federation Business Data Lake by providing superior breadth and depth of analytics to tackle any big data analytics problem an organization might have, whether it is fraud detection, risk management, customer intelligence, or predictive asset maintenance. SAS and EMC work together to deliver a robust and comprehensive big data solution that reduces risk, automates provisioning and configuration, and is purpose-built for big data analytics workloads.
Casey James, EMC
As organizations strive to do more with fewer resources, many modernize their disparate PC operations to centralized server deployments. Administrators and users share many concerns about using SAS® on a Microsoft Windows server. This paper outlines key guidelines, plus architecture and performance considerations, that are essential to making a successful transition from PC to server. It details five key considerations for SAS customers who are changing their configuration from PC-based SAS to SAS on a Windows server: 1) data and directory references; 2) interactive and surrounding applications; 3) usability; 4) performance; and 5) the SAS Metadata Server.
Kate Schwarz, SAS
Donna Bennett, SAS
Margaret Crevar, SAS
While there has been tremendous progress in technologies related to data storage, high-performance computing, and advanced analytic techniques, organizations have only recently begun to comprehend the importance of parallel strategies that help manage the cacophony of concerns around access, quality, provenance, data sharing, and use. While data governance is not new, the drumbeat around it, along with master data management and data quality, is approaching a crescendo. Intensified by the increase in consumption of information, expectations about ubiquitous access, and highly dynamic visualizations, these factors are also circumscribed by security and regulatory constraints. In this paper, we provide a summary of what data governance is and its importance. We go beyond the obvious and provide practical guidance on what it takes to build out a data governance capability appropriate to the scale, size, and purpose of the organization and its culture. Moreover, we discuss best practices in the form of requirements that highlight what we think is important to consider as you provide that tactical linkage between people, policies, and processes to the actual data lifecycle. To that end, our focus includes the organization and its culture, people, processes, policies, and technology. Further, we include discussions of organizational models as well as the role of the data steward, and provide guidance on how to formalize data governance into a sustainable set of practices within your organization.
Greg Nelson, ThotWave
Lisa Dodson, SAS
SAS® blogs (hosted at http://blogs.sas.com/content) attract millions of page views annually. With hundreds of authors, thousands of posts, and constant chatter within the blog comments, it's impossible for one person to keep track of all of the activity. In this paper, you learn how SAS technology is used to gather data and report on SAS blogs from the inside out. The beneficiaries include personnel from all over the company, including marketing, technical support, customer loyalty, and executives. The author describes the business case for tracking and reporting on the activity of blogging. You learn how SAS tools are used to access the WordPress database and how to create a 'blog data mart' for reporting and analytics. The paper includes specific examples of the insight that you can gain from examining the blogs analytically, and which techniques are most useful for achieving that insight. For example, the blog transactional data are combined with social media metrics (also gathered by using SAS) to show which blog entries and authors yield the most engagement on Twitter, Facebook, and LinkedIn. In another example, we identified the growing trend of 'blog comment spam' on the SAS blog properties and measured its cost to the business. These metrics helped to justify the investment in a solution. Many of the tools used are part of SAS® Foundation, including SAS/ACCESS®, the DATA step and SQL, PROC REPORT, PROC SGPLOT, and more. The results are shared in static reports, automated daily email summaries, dynamic reports hosted in SAS/IntrNet®, and even a corporate dashboard hosted in SAS® Visual Analytics.
Chris Hemedinger, SAS
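A hedged sketch of how the 'blog data mart' extraction might begin, using SAS/ACCESS® Interface to MySQL against the standard WordPress schema; the host name and credentials are hypothetical:

   libname wp mysql server='blogsdb.example.com' database=wordpress
      user=reporter password='XXXXXXXX';

   proc sql;
      create table work.posts as
      select ID, post_date, post_title, comment_count
      from wp.wp_posts
      where post_status = 'publish';   /* published posts only */
   quit;

From a staging table like this, the reporting jobs can roll up posts and comments by author, topic, and date.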
Your electricity usage patterns reveal a lot about your family and routines. Information collected from electrical smart meters can be mined to identify patterns of behavior that can in turn be used to help change customer behavior for the purpose of altering system load profiles. Demand Response (DR) programs represent an effective way to cope with rising energy needs and increasing electricity costs. The Federal Energy Regulatory Commission (FERC) defines demand response as changes in electric usage by end-use customers from their normal consumption patterns in response to changes in the price of electricity over time, or to incentive payments designed to lower electricity use at times of high wholesale market prices or when system reliability is jeopardized. In order to effectively motivate customers to voluntarily change their consumption patterns, it is important to identify customers whose load profiles are similar so that targeted incentives can be directed toward them. Hence, it is critical to use tools that can accurately cluster similar time series patterns while providing a means to profile these clusters. In order to solve this problem, though, hardware and software capable of storing, extracting, transforming, loading, and analyzing large amounts of data must first be in place. Utilities receive customer data from smart meters, which track and store customer energy usage. The data collected is sent to the energy companies every fifteen minutes or hourly. With millions of meters deployed, this quantity of information creates a data deluge for utilities: each customer generates about three thousand data points monthly, and more than thirty-six billion reads are collected annually for a million customers. The data scientist is the hunter, and DR candidate patterns are the prey in this cat-and-mouse game of finding customers willing to curtail electrical usage for a program benefit. The data scientist must connect large siloed data sources, external data, and even unstructured data to detect common customer electrical usage patterns, build dependency models, and score them against the customer population. Taking advantage of Hadoop's ability to store and process data on commodity hardware with distributed parallel processing is a game changer. With Hadoop, no data set is too large, and SAS® Visual Statistics leverages machine learning, artificial intelligence, and clustering techniques to build descriptive and predictive models. Data from disparate systems can all be used, including structured data, unstructured data, and log files. The data scientist can use Hadoop to ingest all available data at rest and analyze customer usage patterns, system electrical flow data, and external data such as weather. This paper uses Cloudera Hadoop with Apache Hive queries for analysis on platforms such as SAS® Visual Analytics and SAS Visual Statistics. The paper showcases options within Hadoop for querying large data sets with open-source tools and for importing these data into SAS® for robust customer analytics: clustering customers by usage profile, modeling their propensity to respond to a demand response event, and analyzing the electrical system for Demand Response events.
Kathy Ball, SAS
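A minimal sketch of the Hive-to-SAS step described above, with a hypothetical host, schema, and meter-read table:

   libname ami hadoop server='cloudera01.example.com' port=10000 schema=ami;

   proc sql;
      create table work.daily_usage as
      select meter_id,
             datepart(read_ts) as read_date format=date9.,
             sum(kwh) as kwh_per_day
      from ami.meter_reads              /* hypothetical interval-read table in Hive */
      group by meter_id, calculated read_date;
   quit;

Daily usage profiles built this way can then feed the clustering in SAS Visual Statistics.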
In-database processing refers to the integration of advanced analytics into the data warehouse. With this capability, analytic processing is optimized to run where the data reside, in parallel, without having to copy or move the data for analysis. From a data governance perspective there are many good reasons to embrace in-database processing. Many analytical computing solutions and large databases use this technology because it provides significant performance improvements over more traditional methods. Come learn how Blue Cross Blue Shield of Tennessee (BCBST) uses in-database processing from SAS and Teradata.
Harold Klagstad, BlueCross BlueShield of TN
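A hedged sketch of what enabling in-database processing against Teradata can look like; the server, credentials, and table are hypothetical, not BCBST's actual configuration:

   libname td teradata server='tdprod' user=analyst password='XXXXXXXX'
      database=claims;

   options sqlgeneration=dbms;   /* let eligible Base procedures run inside Teradata */

   /* the summarization executes in the database; only results return to SAS */
   proc means data=td.claim_lines mean sum maxdec=2;
      class service_type;
      var paid_amount;
   run;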
It is a common task to reshape your data from long to wide for the purpose of reporting or analytical modeling, and PROC TRANSPOSE provides a convenient way to accomplish this. However, when performing the transpose action on large tables stored in a database management system (DBMS) such as Teradata, the performance of PROC TRANSPOSE can be significantly compromised. In this case, it is more efficient for the DBMS to perform the transpose task. SAS® provides in-database processing technology in PROC SQL, which allows the SQL explicit pass-through method to push some or all of the work to the DBMS. This technique has facilitated integration between SAS and a wide range of data warehouses and databases, including Teradata, EMC Greenplum, IBM DB2, IBM Netezza, Oracle, and Aster Data. This paper uses the Teradata database as an example DBMS and explains how to transpose a large table that resides in it using the SQL explicit pass-through method. The paper begins with comparing the execution time using PROC TRANSPOSE with the execution time using SQL explicit pass-through. From this comparison, it is clear that SQL explicit pass-through is more efficient than the traditional PROC TRANSPOSE when transposing Teradata tables, especially large tables. The paper explains how to use the SQL explicit pass-through method and discusses the types of data columns that you might need to transpose, such as numeric and character. The paper presents a transpose solution for these types of columns. Finally, the paper provides recommendations on packaging the SQL explicit pass-through method by embedding it in a macro. SAS programmers who are working with data stored in an external DBMS and who would like to efficiently transpose their data will benefit from this paper.
Tao Cheng, Accenture
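A minimal sketch of the explicit pass-through transpose pattern the paper develops, using MAX(CASE...) aggregation inside Teradata; the connection details and table names are hypothetical:

   proc sql;
      connect to teradata (server='tdprod' user=analyst password='XXXXXXXX');
      create table work.balances_wide as
      select * from connection to teradata (
         select cust_id,
                max(case when month_id = 201501 then balance end) as bal_jan,
                max(case when month_id = 201502 then balance end) as bal_feb,
                max(case when month_id = 201503 then balance end) as bal_mar
         from mydb.balances
         group by cust_id
      );
      disconnect from teradata;
   quit;

The paper goes further and wraps this pattern in a macro so that the CASE expressions are generated from the data rather than typed by hand.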
Researchers, patients, clinicians, and other health-care industry participants are forging new models for data-sharing in hopes that the quantity, diversity, and analytic potential of health-related data for research and practice will yield new opportunities for innovation in basic and translational science. Whether we are talking about medical records (for example, EHR, lab, notes), administrative data (claims and billing), social (on-line activity), behavioral (fitness trackers, purchasing patterns), contextual (geographic, environmental), or demographic data (genomics, proteomics), it is clear that as health-care data proliferates, threats to security grow. Beginning with a review of the major health-care data breaches in our recent history, we highlight some of the lessons that can be gleaned from these incidents. In this paper, we talk about the practical implications of data sharing and how to ensure that only the right people have the right access to the right level of data. To that end, we explore not only the definitions of concepts like data privacy, but we discuss, in detail, methods that can be used to protect data--whether inside our organization or beyond its walls. In this discussion, we cover the fundamental differences between encrypted data, 'de-identified', 'anonymous', and 'coded' data, and methods to implement each. We summarize the landscape of maturity models that can be used to benchmark your organization's data privacy and protection of sensitive data.
Greg Nelson, ThotWave
One of the hallmarks of a good or great SAS® program is that it requires only a minimum of upkeep. Especially for code that produces reports on a regular basis, it is preferable to minimize user and programmer input and instead have the input data drive the outputs of a program. Data-driven SAS programs are more efficient and reliable, require less hardcoding, and result in less downtime and fewer user complaints. This paper reviews three ways of building a SAS program to create regular Microsoft Excel reports: one method using hardcoded variables, another using SAS keyword macros, and the last using metadata to drive the reports.
Andrew Clapson, MD Financial Management
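To make the contrast concrete, here is a hedged sketch of the metadata-driven variant: a keyword-parameter macro whose calls are generated from a control data set. The control table, its columns, and the macro name are hypothetical, and the ODS EXCEL destination assumed here is available in recent SAS 9.4 releases:

   %macro xlreport(dsn=, sheet=, outfile=);
      ods excel file="&outfile" options(sheet_name="&sheet");
      proc print data=&dsn noobs;
      run;
      ods excel close;
   %mend xlreport;

   data _null_;
      set work.report_meta;          /* one row per report: dsn, sheet, outfile */
      call execute(cats('%nrstr(%xlreport)(dsn=', dsn,
                        ',sheet=', sheet, ',outfile=', outfile, ')'));
   run;

Adding a report then means adding a row to report_meta, not editing code.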
As a part of regulatory compliance requirements, banks are required to submit reports based on Microsoft Excel, as per templates supplied by the regulators. This poses several challenges, including the high complexity of templates, the fact that implementation using ODS can be cumbersome, and the difficulty in keeping up with regulatory changes and supporting dynamic report content. At the same time, you need the flexibility to customize and schedule these reports as per your business requirements. This paper discusses an approach to building these reports using SAS® XML Mapper and the Excel XML spreadsheet format. This approach provides an easy-to-use framework that can accommodate template changes from the regulators without needing to modify the code. It is implemented using SAS® technologies, providing you the flexibility to customize to your needs. This approach also provides easy maintainability.
Sarita Kannarath, SAS
Phil Hanna, SAS
Amitkumar Nakrani, SAS
Nishant Sharma, SAS
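A minimal sketch of the XML Mapper pattern underpinning the approach: a .map file generated with SAS® XML Mapper lets the XMLV2 engine treat the Excel XML template as a library. The file names and the mapped table are illustrative only:

   filename regrpt 'regulatory_template.xml';   /* Excel XML spreadsheet template */
   libname  regrpt xmlv2 xmlmap='regulatory_template.map' access=readonly;

   proc datasets lib=regrpt;                    /* inspect the mapped tables */
   quit;

   data work.template_cells;
      set regrpt.cells;                         /* hypothetical mapped table name */
   run;

Because the map, not the SAS code, describes the template structure, a revised regulator template usually requires only a regenerated map.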
For the many relational database products that SAS/ACCESS® supports (Oracle, Teradata, DB2, MySQL, SQL Server, Hadoop, Greenplum, PC Files, to name but a few), there are myriad RDBMS-specific options at your disposal, but how do you know the right options for any given situation? How much data should you transfer at a time? Which SAS® functions can be passed through to the database, and which cannot? How do you verify that your processes are running efficiently? How do you test and validate any changes? The answer lies with the feedback capabilities of the SASTRACE system option.
Andrew Howell, ANJ Solutions
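A minimal sketch of turning on that feedback; the Oracle connection is illustrative, and the same options work with any SAS/ACCESS engine:

   options sastrace=',,,d' sastraceloc=saslog nostsuffix;  /* log SQL sent to the DBMS */

   libname ora oracle user=scott password='XXXXXXXX' path=prod;

   proc sql;
      select count(*)
      from ora.orders
      where order_date >= '01JAN2015'd;   /* check the log: was the WHERE passed down? */
   quit;

   options sastrace=off;                  /* turn tracing back off */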
Are you a SAS® software user hoping to convince your organization to move to the latest product release? Has your management team asked how your organization can hire new SAS users familiar with the latest and greatest procedures and techniques? SAS® Studio and SAS® University Edition might provide the answers for you. SAS University Edition was created for teaching and learning. It's a new downloadable package of selected SAS products (Base SAS®, SAS/STAT®, SAS/IML®, SAS/ACCESS® Interface to PC Files, and SAS Studio) that runs on Windows, Linux, and Mac. With the exploding demand for analytical talent, SAS launched this package to grow the next generation of SAS users. Part of the way SAS is helping grow that next generation of users is through the interface to SAS University Edition: SAS Studio. SAS Studio is a web-based development application for SAS that you access through your web browser and--since the first maintenance release of SAS 9.4--is included in Base SAS at no additional charge. The connection between SAS University Edition and commercial SAS means that it's easier than ever to use SAS for teaching, research, and learning, from high schools to community colleges to universities and beyond. This paper describes the product, the intent behind it, and other programs that support it, and then discusses some successes in adopting SAS University Edition to grow the next generation of users.
Polly Mitchell-Guthrie, SAS
Amy Peters, SAS
Is your company using or considering using SAP Business Warehouse (BW) powered by SAP HANA? SAS® provides various levels of integration with SAP BW in an SAP HANA environment. This integration enables you to not only access SAP BW components from SAS, but to also push portions of SAS analysis directly into SAP HANA, accelerating predictive modeling and data mining operations. This paper explains the SAS toolset for different integration scenarios, highlights the newest technologies contributing to integration, and walks you through examples of using SAS with SAP BW on SAP HANA. The paper is targeted at SAS and SAP developers and architects interested in building a productive analytical environment with the help of the latest SAS and SAP collaborative advancements.
Tatyana Petrova, SAS
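As a flavor of the LIBNAME-level integration, a heavily hedged sketch using SAS/ACCESS® Interface to SAP HANA; the host, instance number, credentials, and view name are hypothetical, and the BW-specific scenarios in the paper go well beyond this:

   libname hana saphana server='hanahost.example.com' instance=00
      user=sasuser password='XXXXXXXX';

   proc sql;
      select region, sum(revenue) as revenue
      from hana.sales_view          /* hypothetical HANA view surfaced as a table */
      group by region;
   quit;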
The Hadoop ecosystem is vast, and there's a lot of conflicting information available about how to best secure any given implementation. It's also difficult to fix any mistakes made early on once an instance is put into production. In this paper, we demonstrate the currently accepted best practices for securing and Kerberizing Hadoop clusters in a vendor-agnostic way, review some of the not-so-obvious pitfalls one could encounter during the process, and delve into some of the theory behind why things are the way they are.
Evan Kinney, SAS
Smoking is a leading killer in the United States. More than 8.6 million Americans live with a serious illness caused by smoking or second-hand smoke, and despite this, more than 46.6 million U.S. adults smoke tobacco in the form of cigarettes, cigars, and pipes. The key analytic question in this paper is: how would e-cigarettes affect this public health situation? Can monitoring public opinions of e-cigarettes using SAS® Text Analytics and SAS® Visual Analytics help provide insight into the potential dangers of these new products? Are e-cigarettes an example of Big Tobacco up to its old tricks or, in fact, a cessation product? The research in this paper was conducted on thousands of tweets from April to August 2014. It includes API sources beyond Twitter--for example, indicators from the Health Indicators Warehouse (HIW) of the Centers for Disease Control and Prevention (CDC)--that were used to enrich Twitter data in order to implement a surveillance system developed by SAS® for the CDC. The analysis is especially important to the Office on Smoking and Health (OSH) at the CDC, which is responsible for tobacco control initiatives that help states to promote cessation and prevent initiation in young people. To help the CDC succeed with these initiatives, the surveillance system also: 1) automates the acquisition of data, especially tweets; and 2) applies text analytics to categorize these tweets using a taxonomy that provides the CDC with insights into a variety of relevant subjects. Twitter text data can help the CDC look at the public response to the use of e-cigarettes, and examine general discussions regarding smoking and public health as well as potential controversies (involving tobacco exposure to children, increasing government regulations, and so on). SAS® Content Categorization helps health care analysts review large volumes of unstructured data by categorizing tweets in order to monitor and follow what people are saying and why they are saying it. Ultimately, it is a solution intended to help the CDC monitor the public's perception of the dangers of smoking and e-cigarettes; in addition, it can identify areas where OSH can focus its attention in order to fulfill its mission and track the success of CDC health initiatives.
Manuel Figallo, SAS
Emily McRae, SAS
Every day, companies all over the world are moving their data into the Cloud. While there are many options available, much of this data will wind up in Amazon Redshift. As a SAS® user, you are probably wondering, 'What is the best way to access this data using SAS?' This paper discusses the many ways that you can use SAS/ACCESS® to get to Amazon Redshift. We compare and contrast the various approaches and help you decide which is best for you. Topics discussed include building a connection, choosing appropriate data types, and using SQL functions.
James (Ke) Wang, SAS
Salman Maher, SAS
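A hedged sketch of the LIBNAME route the paper discusses; the cluster endpoint, credentials, and table are hypothetical:

   libname rs redshift
      server='examplecluster.abc123.us-east-1.redshift.amazonaws.com'
      port=5439 database=analytics user=sasuser password='XXXXXXXX';

   proc sql;
      create table work.revenue_by_customer as
      select customer_id, sum(order_total) as revenue
      from rs.orders
      group by customer_id
      order by revenue desc;
   quit;

With implicit pass-through, the aggregation above can run inside Redshift so only the summary rows travel to SAS.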
SAS® users organize their applications in a variety of ways, but some approaches are more successful than others. In particular, the need to run only some of the code in a file, only some of the time, can be challenging. Reproducible research methods require that SAS applications be understandable by the author and other staff members. In this presentation, you learn how to organize and structure your SAS application to manage the process of data access, data analysis, and data presentation. The approach requires that tasks in the process of data analysis be compartmentalized, which can be done with a well-defined program structure. The author presents his structuring algorithm and discusses the characteristics of good structuring methods for SAS applications. Reproducible research methods are becoming increasingly important, and SAS users must keep up with the current developments.
Paul Thomas, ASUP Ltd
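One possible shape for such a driver program, sketched under the assumption of a three-stage access/analyze/report layout; the paths and step names are hypothetical and are not the author's actual algorithm:

   %let projroot = C:\projects\study01;
   %let steps    = access report;      /* run some of the code, some of the time */

   %macro drive;
      %if %sysfunc(indexw(&steps,access)) %then %do;
         %include "&projroot\code\10_access.sas";    /* data access       */
      %end;
      %if %sysfunc(indexw(&steps,analyze)) %then %do;
         %include "&projroot\code\20_analyze.sas";   /* data analysis     */
      %end;
      %if %sysfunc(indexw(&steps,report)) %then %do;
         %include "&projroot\code\30_report.sas";    /* data presentation */
      %end;
   %mend drive;
   %drive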
A maximum harvest in farming analytics is achieved only if analytics can also be operationalized at the level of core business applications. Mapped to the use of SAS® Analytics, the fruits of SAS can be shared with Enterprise Business Applications by SAP. Learn how your SAS environment, including the latest SAS® In-Memory Analytics offerings, can be integrated with SAP applications based on the SAP In-Memory Platform SAP HANA. We'll explore how a SAS® Predictive Modeling environment can be embedded inside SAP HANA and how native SAP HANA data management capabilities, such as SAP HANA Views and Smart Data Access, can be leveraged by SAS applications and contribute to an end-to-end in-memory data management and analytics platform. Come and see how you can extend the reach of your SAS® Analytics efforts with the SAP HANA integration!
Christoph Morgen, SAP SE
Drawing on the results from machine learning, exploratory statistics, and a variety of related methodologies, data analytics is becoming one of the hottest areas in a variety of global industries. The utility and application of these analyses have been extremely impressive and have led to successes ranging from business value generation to hospital infection control applications. This presentation examines the philosophical foundations and epistemology associated with scientific discovery and asks whether the currently used analytics techniques rest on a rational philosophy of science. Examples are provided to make the concepts more concrete for the business and scientific user.
Mike Hardin, The University of Alabama
Interactive Voice Response (IVR) systems are likely one of the best and worst gifts to the world of communication, depending on whom you ask. Businesses love IVR systems because automation of routine tasks takes out hundreds of millions of dollars of call center costs, while consumers hate IVRs because they want to talk to an agent! It is a delicate balancing act to manage an IVR system that saves money for the business, yet is smart enough to minimize consumer abrasion by knowing who consumers are, why they are calling, and providing an easy automated solution or a quick route to an agent. There are many aspects to designing such IVR systems, including engineering, application development, omni-channel integration, user interface design, and data analytics. For larger call volume businesses, IVRs generate terabytes of data per year, with hundreds of millions of rows per day that track all system and customer-facing events. The data is stored in various formats and is often unstructured (lengthy character fields that store API return information or text fields containing consumer utterances). The focus of this talk is the development of a data mining framework based on SAS® that is used to parse and analyze IVR data in order to provide insights into usability of the application across various customer segments. Certain use cases are also provided.
Dmitriy Khots, West Corp
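A minimal sketch of the parsing stage such a framework needs, using Perl regular expressions to pull an intent code out of a lengthy API return field; the table and field names are hypothetical:

   data work.ivr_parsed;
      set ivr.call_events;                      /* hypothetical raw IVR event table */
      length intent $32;
      retain rx;
      if _n_ = 1 then rx = prxparse('/intent=(\w+)/');
      if prxmatch(rx, api_response) then
         intent = prxposn(rx, 1, api_response); /* capture group 1: the intent code */
   run;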
Many papers have been written over the years that describe how to use Dynamic Data Exchange (DDE) to pass data from SAS® to Excel. This presentation aims to show you how to do the same exchange with the SAS Output Delivery System (ODS) and the TEMPLATE Procedure.
Peter Timusk, Statistics Canada
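A minimal sketch of the ODS-plus-PROC TEMPLATE pairing the presentation covers; the style attributes and file name are illustrative:

   proc template;
      define style styles.xlreport;
         parent = styles.printer;
         style header from header /     /* bold, shaded column headings */
            background  = cxDCE6F1
            font_weight = bold;
      end;
   run;

   ods tagsets.excelxp file='class.xml' style=styles.xlreport;
   proc print data=sashelp.class noobs;
   run;
   ods tagsets.excelxp close;

Unlike DDE, this runs on any platform and needs no copy of Excel on the SAS server.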
Many freshmen leave their first college and go on to attend another institution. Some of these students are even successful in earning degrees elsewhere. As there is more focus on college graduation rates, this paper shows how the power of SAS® can pull in data from many disparate sources, including the National Student Clearinghouse, to answer questions on the minds of many institutional researchers. How do we use the data to answer questions such as "What would my graduation rate be if these students graduated at my institution instead of at another one?", "What types of schools do students leave to attend?", and "Are there certain characteristics of students who leave, and are they concentrated in certain programs?" The data-handling capabilities of SAS are perfect for this type of analysis, and this presentation walks you through the process.
Stephanie Thompson, Datamum
Working with multiple data sources in SAS® was not straightforward until PROC FEDSQL was introduced in the SAS® 9.4 release. Federated Query Language, or FEDSQL, is a vendor-independent language that provides a common SQL syntax for communicating across multiple relational databases without having to worry about vendor-specific SQL syntax. PROC FEDSQL is a SAS implementation of the FEDSQL language. It enables us to write federated queries that join tables from different databases in a single query, without first loading the tables into SAS individually and combining them using DATA steps and PROC SQL statements. The objective of this paper is to demonstrate how PROC FEDSQL fetches data from multiple data sources such as a Microsoft SQL Server database, a MySQL database, and a SAS data set, and runs federated queries across all of them. Other powerful features of PROC FEDSQL, such as transactions and the FEDSQL pass-through facility, are discussed briefly.
Zabiulla Mohammed, Oklahoma State University
Ganesh Kumar Gangarajula, Oklahoma State University
Pradeep Reddy Kalakota, Federal Home Loan Bank of Des Moines
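A hedged sketch of such a federated join; the librefs, connection details, and tables are hypothetical:

   libname mssql odbc dsn=sqlserver_prod;             /* Microsoft SQL Server via ODBC */
   libname mydb  mysql server='dbhost' database=sales
                 user=rpt password='XXXXXXXX';        /* MySQL */
   libname local 'C:\data';                           /* SAS data sets */

   proc fedsql;
      create table local.combined as
      select c.cust_id, c.region, o.order_total
      from mssql.customers c
           inner join mydb.orders o
           on c.cust_id = o.cust_id;                  /* one query, two databases */
   quit;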
Managing and organizing external files and directories plays an important part in our data analysis and business analytics work. A good file management system can streamline project management and file organization and significantly improve work efficiency. Therefore, under many circumstances, it is necessary to automate and standardize the file management processes through SAS® programming. Compared with managing SAS files via PROC DATASETS, managing external files is a much more challenging task that requires advanced programming skills. This paper presents and discusses various methods and approaches to managing external files with SAS programming. The illustrated methods and skills can have important applications in a wide variety of analytic fields.
Justin Jia, Trans Union
Amanda Lin, CIBC
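A minimal sketch of the function-based approach such automation relies on (FILENAME, FEXIST, FDELETE, DOPEN, and friends); the paths are hypothetical:

   data _null_;
      /* create an archive directory if it does not yet exist */
      if fileexist('C:\project\archive') = 0 then
         newdir = dcreate('archive', 'C:\project\');

      /* delete an obsolete external file via a temporary fileref */
      rc = filename('oldf', 'C:\project\extract_2014.csv');
      if fexist('oldf') then rc = fdelete('oldf');
      rc = filename('oldf');             /* clear the fileref */

      /* list the files in a directory */
      rc  = filename('projdir', 'C:\project');
      did = dopen('projdir');
      do i = 1 to dnum(did);
         fname = dread(did, i);
         put fname=;
      end;
      rc = dclose(did);
   run;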
How often have you pulled oodles of data out of the corporate data warehouse down into SAS® for additional processing? This additional processing, sometimes thought to be uniquely SAS, might include FIRST. logic, cumulative totals, lag functionality, specialized summarization, or advanced date manipulation. Using the analytical (or OLAP) and windowing functionality available in many databases (for example, Teradata and IBM Netezza), all of this processing can be performed directly in the database without moving and reprocessing detail data unnecessarily. This presentation illustrates how to increase your coding and execution efficiency by using the database's power through your SAS environment.
Harry Droogendyk, Stratia Consulting Inc.
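A hedged sketch of the idea, using Teradata's QUALIFY clause with ROW_NUMBER() to replicate FIRST.-style logic in-database; the connection details and tables are hypothetical:

   proc sql;
      connect to teradata (server='tdprod' user=analyst password='XXXXXXXX');
      create table work.first_visits as
      select * from connection to teradata (
         select cust_id, visit_dt, store_id
         from mydb.visits
         qualify row_number()
                 over (partition by cust_id order by visit_dt) = 1
      );
      disconnect from teradata;
   quit;

Only one row per customer crosses the wire; the detail data never leaves the database.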