Thought Leadership White Paper


Reengineering IT discovery with analytics and visualization

Contents
• Introduction
• The inevitable push towards greater efficiency
• The need for better IT discovery
• Building a more comprehensive snapshot of the data center
• Changing the parameters for IT discovery
• How ALDM works
• Identifying issues that hinder operational efficiency and resilience
• Compiling affinity groups automatically
• Identifying the best candidates for virtualization
• Extending insights with data visualization
• The confluence of discovery analytics and human analysis
• Conclusion
• For more information

Introduction
An intimate knowledge of IT assets and dependencies has always been imperative to mitigating the risk of data center migrations and improving the resiliency of the IT environment. But the IT discovery process can be slow, costly and prone to error. And for all their value in helping organizations determine where and how to plan a migration or improve IT resiliency, traditional asset inventories and dependency maps provide only part of the picture.

With modern IT infrastructures an intricate web of interdependencies, uncovering the total IT environment, including the logical relationships between physical, virtual and cloud elements, has never been more important—or more complex. IBM’s analytics for logical dependency mapping,
ALDM, reengineers the IT discovery process to provide a more complete and accurate view of the IT infrastructure. It is designed to facilitate data center migration planning but also provide insights for companies looking to optimize or improve the resiliency of their IT environment. Developed by IBM
Research, ALDM uses analytics and advanced mathematical modeling to simplify IT inventories and dependency mapping, while extending data collection capabilities to deliver new insights about the infrastructure.
ALDM reads server configuration files to reveal dependencies that would otherwise go undetected. It fills information gaps and exposes platform anomalies that cannot be seen with many other discovery tools. But ALDM also delivers actionable insights with greater speed and rich visualization. It accelerates discovery by as much as 30 to 40 percent using automation and the scalable processing power of the cloud. And it replaces obscure, one-dimensional dependency maps with interactive infographics that enable companies to dynamically navigate complex IT environments and gain deeper insights into the configuration and operation of the infrastructure.

Data Center Services

ALDM also replaces much of the labor-intensive manual analysis that has become synonymous with IT discovery.
Affinity groups identify dependent assets that should be migrated together to avoid business application outages.
Resource utilization trending data suggests assets that could be rationalized, virtualized or consolidated to improve operational efficiency. These and other insights are automatically generated and delivered cost-effectively, enabling
IT managers to initiate the discovery process for a single platform or the entire IT infrastructure.

The inevitable push towards greater efficiency
Today’s data center is the veritable nerve center of the enterprise, with the infrastructure powering virtually every aspect of the business. With IT continuously challenged to optimize resource usage, conserve energy and reduce costs, infrastructure efficiency is always top of mind—and with good reason. IBM’s 2012 Data Center Study found that highly efficient data centers are able to shift 50 percent more of the IT budget to new projects.1 These IT organizations are spending less time maintaining the infrastructure and more time innovating.
Rapid advances in technology drive efficiency improvements, but business growth and change make them essential. After years of largely unfettered expansion, most IT infrastructures are highly heterogeneous and fragmented, with multivendor hardware, software and standards. Excessive energy consumption, hardware and software redundancy, and poor utilization are frequently the norm, resulting in higher costs and greater environmental impact than necessary.


For many companies, the quest for greater IT efficiency begins with consolidation and virtualization to increase capacity and availability while creating a smaller footprint. Standardization and automation tend to follow, with cloud computing increasingly in the mix and data center relocation and IT resiliency optimization initiatives often necessary to achieve desired efficiency objectives.
Whatever choices companies make, one thing is clear. They need to understand the existing IT environment before they can make decisions about relocating or optimizing it. Data center initiatives of any magnitude can expose the business to significant risks, and smooth transitions depend on a complete and accurate picture of the infrastructure.

The need for better IT discovery
Because the IT environment changes continually as new equipment and applications are added, even inventory data collected a few weeks ago can quickly become out of date.
While numerous commercially available tools have been developed to automate the IT discovery process, most have been designed to monitor the IT infrastructure continuously.
These tools typically collect data over a longer period than is necessary for many data center migrations and IT resiliency optimization efforts. Further, because they can be hard to configure and manage and expensive to run, they are usually installed on a limited number of production servers, collecting only a subset of the information needed for a typical migration or enterprise-wide upgrade. Similarly, homegrown discovery tools developed to assess the IT environment for a specific business purpose, like an application upgrade, may not provide all of the information needed for more encompassing IT initiatives.


Despite the availability of automated discovery tools, most data center inventories are still conducted manually by interviewing platform owners one at a time. The process is time-consuming and error-prone, relying on assigned IT personnel to correctly document every logical dependency flow between servers, middleware and applications. Needless to say, these inventories are only as good as the information collected. In fact, IBM’s experience with hundreds of client engagements found that manual inventories are usually only 40 to 60 percent accurate.

Building a more comprehensive snapshot of the data center


ALDM determines dependencies by scanning a server’s network connections, but also by examining the server’s configuration files. It not only identifies the middleware that runs on the server, it identifies other machines—virtual as well as physical—that the server is configured to communicate with. So, ALDM identifies dependencies that are observed during the scanning period, but also dependencies that are configured and not observed. Because it doesn’t have to see server dependencies in action to know they exist, it can provide a more complete view of dependencies, and it can do it in a few days.

Without a complete picture of IT assets and their dependencies, an IT manager’s optimization efforts are handicapped. Data center migrations can be particularly difficult because IT managers are unable to factor all of the dependencies into their scheduled equipment moves. As such, they are more likely to overlook or even retire assets that are still required by the operation. Such inadvertent changes can result in costly business disruptions and outages. And the longer it takes to detect them, the more devastating their impact on the business.

Advances in analytics, automation and data visualization have paved the way for a new kind of IT discovery, one that is more compatible with the large, distributed and complex nature of today’s data center environments. IBM’s analytics for logical dependency mapping, ALDM, dramatically improves on current IT discovery technologies, reengineering both the way assets and dependencies are discovered and the way analytic insights are delivered to users.

ALDM also provides actionable insights about IT assets that would not otherwise be possible without expending considerable time and effort. Insights like multi-level dependency groupings, which can take weeks of human analysis, are automatically generated. ALDM provides shortcuts to other information as well, including resource utilization statistics, which can be helpful in data center consolidation and server virtualization efforts.

Data Center Services

ALDM’s dependency maps provide a panoramic view of an organization’s IT infrastructure. Its data visualization capability brings the maps to life, enabling users to interact with discovered data. Visualization facilitates exploration by allowing users to drill down for additional detail about an asset’s attributes and dependencies using infinitely customizable filters and dynamic rendering (Figure 1). Users can determine, for example, which assets to upgrade, which to retire and which to relocate, but also which should be virtualized to gain greater efficiency.

Figure 1. Dynamic rendering of the IT environment. A visualized infrastructure is displayed in dots (server nodes) and lines (dependencies). Using touch-screen navigation and customizable filters on the left of the display, users can drill down to focus on specific data, like all web servers, servers made by certain vendors or servers with certain middleware.


Changing the parameters for IT discovery
ALDM offers a quicker route to IT discovery insights because its focus is point-in-time discovery. It takes a snapshot of the operation as it exists today, rather than providing a continuous view of the infrastructure over time. In so doing, it avoids the hefty overhead costs and management requirements associated with continuously running discovery tools. Therefore, it is more versatile and can be used more frequently to accomplish a wide range of objectives. In fact, it can be run on one server or any combination of servers to assist with:
• Conducting routine inventories to identify and purge unsupported and redundant software and versions that are driving up management and licensing costs
• Preparing the IT infrastructure for the deployment of a new operating system or application
• Determining the fate of specific assets as part of a data center consolidation or migration initiative
• Grouping dependent assets to facilitate migration planning and equipment moves
• Identifying assets that are the best candidates for virtualization
• Verifying compliance with established IT operating standards.

How ALDM works
ALDM runs directly on a company’s servers. Once the ALDM script is downloaded, it can be copied to select servers and executed using a simple, one-line instruction. Generally, the script is set up to execute every 15 minutes for 5 to 7 days, capturing information from each server on which it is installed.
This time frame is sufficient for most companies; however,
ALDM’s run duration can be modified to support each organization’s own server environment.
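As a rough illustration only, a collection cadence like the one described above could be driven by an ordinary cron entry. The script path and name below are invented placeholders; the white paper does not document how the ALDM script is actually scheduled:

```shell
# Hypothetical crontab entry: run the collector every 15 minutes.
# /opt/aldm/aldm_collect.sh is a placeholder name, not the real ALDM script.
*/15 * * * * /opt/aldm/aldm_collect.sh >> /var/log/aldm_collect.log 2>&1
```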

The ALDM script runs transparently. Its impact on performance is about 2 to 5 percent the first time it is executed and negligible after that. Moreover, it does not read, copy or collect application or user data, and it does not contain any executables, agents or probes that could pose a security risk.

Static and dynamic data collection
ALDM uses three methods to extract static and dynamic data from the scanned servers. First, ALDM analyzes the server log files to identify any historical dependencies. Second, ALDM reads the server configuration files to identify hardware details like model number and serial number, but also to discover all dependencies the server is configured for. It identifies middleware that has been configured to access a database server, for example, or middleware that has been configured to access other middleware on other servers. This ability to read the configuration files allows ALDM to capture server dependencies that may not be observed during the 5 to 7 day scanning period.
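As a hedged sketch of the configuration-file idea (not ALDM’s actual implementation, which is not published), the snippet below pulls host:port references out of a config file’s text; each reference implies a configured dependency even if no traffic to that host is ever observed. The config format and host names are invented for illustration:

```python
import re

def configured_dependencies(config_text):
    """Extract host:port references from a config file's text.

    Each match is a dependency the server is *configured* for,
    whether or not any traffic to it is ever observed.
    """
    # Match patterns like "payments-db01:1521" embedded in config lines.
    pattern = re.compile(r"\b([a-zA-Z][\w.-]*):(\d{2,5})\b")
    return sorted({(host, int(port)) for host, port in pattern.findall(config_text)})

sample = """
# hypothetical middleware config
datasource.url = jdbc:oracle:thin:@payments-db01:1521/ORCL
cache.server   = memcache02:11211
"""
print(configured_dependencies(sample))
```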


Finally, ALDM collects dynamic information about the activity taking place between servers. ALDM records observed dependencies by monitoring incoming and outgoing traffic at each port. It also captures resource utilization and other statistics for each scanned server, helping to complete the dependency picture for the IT infrastructure.
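The observed side of the picture can be sketched the same way. Assuming connection records sampled periodically from each server’s ports (the tuple format here is invented), repeated observations collapse into a single dependency edge annotated with the ports seen:

```python
from collections import defaultdict

def observed_dependencies(connection_log):
    """Aggregate raw connection records into observed dependency edges.

    connection_log: iterable of (local_host, remote_host, remote_port)
    tuples, e.g. sampled from port monitoring every few minutes.
    Returns {(local, remote): set of ports seen on that edge}.
    """
    edges = defaultdict(set)
    for local, remote, port in connection_log:
        edges[(local, remote)].add(port)
    return dict(edges)

log = [
    ("web01", "app01", 8080),
    ("web01", "app01", 8080),   # repeat observations collapse into one edge
    ("app01", "db01", 1521),
]
print(observed_dependencies(log))
```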

Multilevel dependency grouping
ALDM doesn’t just identify dependencies between assets; it identifies multilevel dependencies between them. It can determine that Server A is connected to Server B, but also that Server B is connected to Server C, and other downstream connections. Understanding multilevel dependencies makes it possible to follow asset dependencies as they traverse the infrastructure and to determine how they actually affect the business. More specifically, by enabling IT architects to map applications to servers, multilevel dependency groups provide them with a more application-centric understanding of the infrastructure. Without such a view, it is easy to miscalculate how applications will be impacted by changes they make to the infrastructure.
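The multilevel idea is essentially a graph traversal: starting from one server, follow dependency edges transitively to find everything downstream. A minimal sketch, with made-up server names:

```python
from collections import deque

def downstream(server, edges):
    """Follow dependencies transitively: everything reachable from `server`."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
    seen, queue = set(), deque([server])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Server A -> B -> C -> D: moving A requires knowing about C and D as well.
edges = [("A", "B"), ("B", "C"), ("C", "D")]
print(downstream("A", edges))
```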
Multilevel dependency grouping is also a very significant benefit in data center relocations because it enables organizations to more easily determine the impact of moving their applications. IT architects can look at a web server application, for example, and figure out every connected infrastructure element (web server, application server, database server, license server, etc.) that needs to be migrated together to prevent disruption. When adding a new application, IT architects can use ALDM’s multilevel dependency insights to determine how the application is going to connect to the web server and what else the web server connects to that could be affected. They can figure out the impact on the infrastructure and make any necessary adjustments before the application is installed.

Cloud-based processing and analytics
Once data is collected from a company’s configuration and server log files, it is submitted to IBM for processing. ALDM’s backend analytic and visualization engines run in the IBM SmartCloud Enterprise (SCE) cloud, which provides a scalable environment and support for the powerful analytics needed to process a massive volume of data quickly. IBM’s preliminary test cases have found the cloud to reduce processing time significantly, by an average of 20 times compared with traditional processing environments.2 What used to take 3 hours can now be accomplished in less than 10 minutes.


ALDM parses all of the data, extracting only the asset, dependency and utilization information needed. Then
ALDM’s algorithms and mathematical modeling go to work, correlating the captured data and going beyond standard analytic calculations and number crunching to produce a set of meaningful insights about the infrastructure. Its discovery analytics improve infrastructure visibility and facilitate the development of more targeted—and ultimately more effective—IT optimization initiatives.

Identifying issues that hinder operational efficiency and resilience
IT can only address problems it knows exist. One of the major benefits of IT discovery is that it frequently reveals issues that could be impeding the day-to-day operation, reducing the resiliency of the infrastructure or lessening the success of consolidation and relocation initiatives. These include redundant and obsolete operating systems and middleware and assets with unknown server connections. By providing a more current and complete view of the IT infrastructure, ALDM makes it easy to spot configuration anomalies and discrepancies with established IT and business standards.
These insights are useful in general assessments of the IT architecture because they identify weaknesses and facilitate the prioritization of improvements.
Tabular inventories containing configuration details and installed middleware make it easy to pick out servers that are running redundant or legacy versions of system software and middleware (Figures 2 and 3). The cost to monitor and maintain multiple versions can be significant, not to mention the increased security risk if these versions are no longer supported by the manufacturer. IT managers can use the ALDM inventories to direct the removal of extraneous versions and avoid paying unnecessary licensing fees.

Figure 2. Detailed middleware inventory for each server. ALDM provides asset-specific middleware detail for each scanned server, enabling IT architects to uncover potentially redundant middleware running on the same machine.


Figure 3. High-level view of the server infrastructure. Taking a higher-level view across the server inventory enables IT architects to scan for old and unsupported systems.

ALDM dependency maps routinely identify dependent servers that are not known to be connected to other servers (Figure 4).
ALDM is more likely to find these “orphan servers” because of its ability to comb through server configuration files and identify configured dependencies, not just observed dependencies. Servers are often configured for dependencies that are not observed during the IT discovery scanning period. Orphan servers are rarely scanned by ALDM; they are discovered because they are connected to a server that is.
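In graph terms, an orphan server is a node that appears in some scanned server’s dependency edges but was never itself scanned. A small illustrative sketch (names invented):

```python
def find_orphans(scanned_servers, dependencies):
    """Servers that appear in dependency edges but were never scanned.

    Orphans surface because a *scanned* server's configuration or traffic
    references them, even though no collector ran on them.
    """
    referenced = {host for edge in dependencies for host in edge}
    return referenced - set(scanned_servers)

scanned = {"web01", "app01"}
deps = [("web01", "app01"), ("app01", "legacy-db")]
print(find_orphans(scanned, deps))
```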

Compiling affinity groups automatically

ALDM inventories and dependency maps can also be used to facilitate new application installations and upgrades. They enable IT architects to get a quick read on the destination server environment and verify that critical middleware and operating system software are already in place to allow for a smooth transition. They also alert IT to configuration issues and dependencies that could impact the application’s performance and availability.

ALDM uses its comprehensive dependency insights to compile dependency groups. These groups identify dependent assets that should be migrated together to avoid application and service outages. The automatic grouping of these assets reduces the painstaking human analysis that is typically required in the assembly of such groups (Figure 5).
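Grouping dependent assets is, at heart, a connected-components computation: treat dependencies as undirected for migration purposes and partition servers so that anything connected lands in the same group. A sketch using union-find (server names invented; ALDM’s actual grouping algorithm is not published):

```python
def dependency_groups(servers, dependencies):
    """Partition servers into groups that should migrate together.

    Dependencies are treated as undirected: if A talks to B in either
    direction, moving one without the other risks an outage.
    """
    parent = {s: s for s in servers}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in dependencies:
        parent[find(a)] = find(b)

    groups = {}
    for s in servers:
        groups.setdefault(find(s), set()).add(s)
    return sorted(groups.values(), key=lambda g: sorted(g))

servers = ["web01", "app01", "db01", "hr01", "hr-db"]
deps = [("web01", "app01"), ("app01", "db01"), ("hr01", "hr-db")]
print(dependency_groups(servers, deps))
```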

The relocation and removal of infrastructure assets make data center consolidations and migrations inherently risky, especially when the cost of a single application outage can average as much as half a million dollars. Given customers’ decreasing tolerance for downtime, even planned outages can be very damaging to a company’s brand and bottom line.3 ALDM’s analytics change the game by helping to limit the chance of disruption.


[Figure 4 graphic: a dependency map of masked server nodes (IP addresses such as 10.xxxx.214, host names such as xxxxcom.br), each annotated with its discovered middleware installs (Apache HTTP Server, WebLogic Server, MySQL, NFS, Samba, Symantec Antivirus, gcc, bash, perl, cvs), alongside a cluster of discovered “orphan servers” identified only by IP address.]

Figure 4. Graphical visualization. ALDM shows directional dependencies and discovered middleware while helping to uncover orphan servers. Servers are color-coded to facilitate data center migration planning.

However, IBM architects still review the dependency groups for accuracy and suitability, based on IT policy considerations and other insights like client preferences for how the migration should be organized. Once the groups are approved, they are re-labeled affinity groups. Sometimes multiple dependency groups are combined into a single affinity group, depending on the IT environment and desired migration strategy. Since most IT environments are too big to migrate all at once, affinity groups ease the migration by helping organizations structure their equipment migration in logical segments.


[Figure 5 graphic: the same style of dependency map as Figure 4, with the masked server nodes and their middleware installs now partitioned into color-coded groups.]

Figure 5. Affinity groups. Once ALDM’s automatically generated dependency groups are verified by IBM architects and additional IT policy considerations are factored in, affinity groups are created. Affinity groups help clients structure the data center migration process so that dependent assets are migrated together. Each group is color-coded, based on various migration planning criteria.

Identifying the best candidates for virtualization
Understanding the resource utilization of individual assets is of paramount importance for IT organizations that have made the decision to virtualize or consolidate. ALDM provides a quick and easy way to collect the utilization and other data needed to help determine the best candidates for server virtualization or consolidation.

Resource utilization data is presented statistically and graphically. Tables show peak and mean utilization for each server, and graphs plot CPU, disk, memory and network utilization over a user-defined sampling period (Figure 6).
System administrators can use this and other ALDM data to determine which devices to virtualize, upgrade and sunset.
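The peak and mean statistics described above, and the per-sample “% of peak” values plotted in Figure 6, reduce to simple arithmetic over the raw samples. A minimal sketch:

```python
def utilization_summary(samples):
    """Peak and mean utilization, plus each sample as a % of peak.

    samples: raw utilization readings (0-100) for one resource,
    e.g. hourly CPU samples for a single server.
    """
    peak = max(samples)
    mean = sum(samples) / len(samples)
    pct_of_peak = [round(100 * s / peak, 1) for s in samples]
    return peak, mean, pct_of_peak

cpu = [10, 20, 40, 30]
peak, mean, pct = utilization_summary(cpu)
print(peak, mean, pct)  # -> 40 25.0 [25.0, 50.0, 100.0, 75.0]
```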



[Figure 6 graphic: a chart titled “ATLWSCTX - Utilization to Peak” plotting % of maximum (0 to 120%) against hours 0 to 23, with four series: CPU, memory, network and disk, each expressed as a percentage of peak.]

Figure 6. Resource utilization trending. System administrators can use ALDM’s utilization data to identify underutilized servers and determine which servers to consolidate, virtualize and retire.

Extending insights with data visualization
Data visualization transforms ALDM’s infographics into interactive tools that IT architects can use to better understand the data center environment. Instead of relying solely on one-dimensional dependency maps that show assets and dependencies in intricate “spider charts,” data visualization delivers a multidimensional, navigable schema that makes it easier to understand the logical relationships between the assets.
Infinitely customizable filters enable IT architects to boil down large amounts of data into manageable chunks and drill down for server detail, including specific hardware attributes and installed middleware. By filtering out extraneous nodes, they can focus in on the nodes that are relevant to a specific objective (Figure 7). They can move quickly from a high-level, overall view of infrastructure nodes and dependencies to a detailed view of a single node with the simplicity of touch-screen navigation.

ALDM’s data visualization capability operates on the Apple iPad and uses standard iPad navigation techniques. IT architects simply tap the screen and slide their fingers to navigate through the IT infrastructure. For example, tapping on a specific server node (represented as a dot in the visualization schema) highlights and enlarges all of its attributes and dependencies while other non-related elements fade from view (Figure 8).
This ability to shift fluidly from a panoramic view of the infrastructure to detail views of specific assets simplifies IT discovery dramatically. With a clear picture of each asset’s activity, usage level and importance in the context of the overall infrastructure, IT architects can make more informed decisions about the infrastructure. Migration architects, in particular, can retrieve a more precise view of configured dependencies and ensure that dependent applications and hardware will be brought online together, without incident.
In short, IT can execute infrastructure optimization initiatives with greater confidence.
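Conceptually, the customizable filters described above amount to selecting the subset of nodes whose attributes match every active criterion. A sketch with an invented node schema (ALDM’s real data model is not published):

```python
def filter_nodes(nodes, **criteria):
    """Keep only nodes whose attributes match every given criterion.

    nodes: list of dicts describing servers; attribute names here are
    invented for illustration.
    """
    return [n for n in nodes if all(n.get(k) == v for k, v in criteria.items())]

nodes = [
    {"host": "db01", "role": "database", "vendor": "IBM"},
    {"host": "web01", "role": "web", "vendor": "HP"},
    {"host": "db02", "role": "database", "vendor": "HP"},
]
print(filter_nodes(nodes, role="database"))               # both database servers
print(filter_nodes(nodes, role="database", vendor="HP"))  # narrows to db02
```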

The confluence of discovery analytics and human analysis
For all their capability distilling volumes of data into usable insights, discovery analytics are not a substitute for human analysis but rather a supplement to it. People’s knowledge, reasoning and experience must be applied to validate ALDM’s machine-generated insights and to make independent decisions not possible with programmatic instructions and mathematical computation.


Figure 7. Customizable filters. IT architects tap on the filter tiles on the left of the display to drill down to the desired level of server detail. In this example, filters have been used to hide the IT environment’s web, application and infrastructure servers. Only database servers are visible, with each dot representing a single DBMS server node and each line representing a dependency.

Consider ALDM’s resource utilization trending metrics, which identify potential servers for virtualization based on a series of aggregated calculations. Only individuals who understand the potential business ramifications can determine which of those servers should actually be virtualized or consolidated.
Likewise, while ALDM reveals platform inconsistencies across the data center, it cannot tell whether those platforms are still supported and, if so, for how long. So while machine-generated analytics provide meaningful insights and highlight red flags, IBM works with companies to sort through the analytic results and determine whether action should be taken.



Figure 8. Detail view of server attributes. Using standard iPad finger gestures, IT architects can enlarge the view to focus on specific servers (as shown in the center panel), then tap on a desired node to view its host and IP name, OS, middleware and hardware attributes (shown in the right panel). In this example, all of the discovered detail about node ‘pasdbs09’ is displayed in the sliding panel at right.

IBM analysts review the output and prepare a formal report, then meet with companies to present issues of potential concern. The importance of these face-to-face discussions cannot be overstated. While companies are likely to recognize some of the more obvious problems and inconsistencies, others take a trained eye and years of experience to detect.
Take the case of virtualized development and production servers running on the same physical server. Most companies understand the security risk posed by such a conflict of interest but are often unaware of instances in their own environment.

Spotting the problem involves seeing production applications and compilers running together on the same machine— something that IBM analysts routinely look for and find, but which is otherwise not commonly recognized.
A typical ALDM implementation, including IBM’s insights analysis, resulting report and other deliverables, and summary discussion with clients, runs several weeks from start to finish.


ALDM deliverables summary
• ALDM analysis report: infrastructure summary and analysis; description and criticality of risks, including legacy operating systems and middleware, redundant versions of middleware and orphan servers; remediation options for each risk; additional insights
• Multilevel dependency mapping: graphic visualization of server-to-server dependency mapping, based on user criteria
• Resource utilization trending: graphs and tables illustrating CPU, memory and disk utilization for each server, normalized for peak and average
• Data visualization: access to visualization capability for each server and the overall infrastructure, plus server-specific pictures in vector image format
• ALDM data model: all data collected and parsed by ALDM, presented in spreadsheet format


Conclusion
With so much riding on the efficiency and performance of the infrastructure, IT discovery is likely to become a standard element in the health regimen for today’s well-run operations.
Whether providing the basis for periodic inventory scrubs,
IT optimization or data center migration initiatives, discovery analytics have the potential to simplify the process while enabling a more comprehensive and accurate view of IT assets and dependencies.
ALDM accelerates the IT discovery process and significantly lowers the cost, making it feasible to assess individual servers or the entire operation as often as required. Instead of limiting
IT discovery to one-off projects, it can become part of a company’s ongoing maintenance and efficiency program, facilitating the detection of configuration problems and streamlining inventory collection.
With the ability to read and analyze server configuration files,
ALDM can see dependency insights that other automated tools and manual inventorying processes cannot. And data visualization enables deeper understanding by bringing greater clarity to the results. For companies looking to drive
IT efficiency through consolidation, migration, IT resiliency optimization or other major transformation initiatives,
ALDM insights can be the linchpin to a seamless, disruption-free transition.


For more information
To learn how IBM is helping organizations improve IT discovery, please contact your IBM representative or IBM Business Partner, or visit ibm.com/services/aldm

© Copyright IBM Corporation 2013
IBM Global Services
Route 100
Somers, NY 10589
U.S.A.
Produced in the United States of America
January 2013
All Rights Reserved
IBM, the IBM logo and ibm.com are trademarks of International Business
Machines Corporation in the United States, other countries or both.
If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or TM), these symbols indicate U.S. registered or common law trademarks owned by
IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. Other product, company or service names may be trademarks or service marks of others. A current list of IBM trademarks is available on the web at
“Copyright and trademark information” at ibm.com/legal/copytrade.shtml

This document is current as of the initial date of publication and may be changed by IBM at any time.
Not all offerings are available in every country in which IBM operates.
The performance data discussed herein is presented as derived under specific operating conditions. Actual results may vary. It is the user’s responsibility to evaluate and verify the operation of any other products or programs with IBM products and programs.
THE INFORMATION IN THIS DOCUMENT IS PROVIDED
“AS IS” WITHOUT ANY WARRANTY, EXPRESS OR
IMPLIED, INCLUDING WITHOUT ANY WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE AND ANY WARRANTY OR CONDITION OF NONINFRINGEMENT. IBM products are warranted according to the terms and conditions of the agreements under which they are provided.
1 Data center operational efficiency best practices: Enabling increased new project spending by improving data center efficiency, IBM, April 2012.