Thursday, August 30, 2007

Security Metrics

The pressure is on. Various surveys indicate that over the past several years computer security has risen in priority for many organizations. Spending on IT security has increased significantly in certain sectors: four-fold since 2001 within the federal government alone.[1] As with most concerns that achieve high-priority status with executives, computer security is increasingly becoming a focal point not only for investment, but also for scrutiny of the return on that investment. In the face of regular, high-profile news reports of serious security breaches, security managers are being held accountable, more than ever before, for demonstrating the effectiveness of their security programs.

What means should managers use to meet this challenge? Some experts believe that security metrics should be key among them.[2] This guide provides a definition of security metrics, explains their value, discusses the difficulties in generating them, and suggests a methodology for building a security metrics program.

Definition of Security Metrics
It helps to understand what metrics are by drawing a distinction between metrics and measurements. Measurements provide single-point-in-time views of specific, discrete factors, while metrics are derived by comparing two or more measurements, taken over time, against a predetermined baseline.[3] Measurements are generated by counting; metrics are generated from analysis.[4] In other words, measurements are objective raw data, and metrics are either objective or subjective human interpretations of those data. Good metrics are SMART: specific, measurable, attainable, repeatable, and time-dependent, according to George Jelen of the International Systems Security Engineering Association.[5] Truly useful metrics indicate the degree to which security goals, such as data confidentiality, are being met, and they drive actions taken to improve an organization’s overall security program.

A Good Metric Must:
1. Be consistently measured. The criteria must be objective and repeatable.
2. Be cheap to gather. Using automated tools (such as scanning software or
password crackers) helps.
3. Contain units of measure. Time, dollars or some numerical scale should be included—not just, say, "green," "yellow" or "red" risks.
4. Be expressed as a number. Give the results as a percentage, ratio or some other kind of actual measurement. Don't give subjective opinions such as "low risk" or "high priority."
Source: Andrew Jaquith
A Good Visualization of Metrics Will:
1. Not be oversimplified. Executives can handle complex data if it's presented clearly.
2. At the same time, not be ornate. Gratuitous pictures, 3-D bars, florid design and noise around the data diminish effectiveness.
3. Use a consistent scale. Switching scales within a single graphic presentation makes it confusing or suggests you're trying to bend the facts.
4. Include a comparison to a benchmark, where applicable. "You are here" or "The industry is here" is often a simple but informative comparative element to add.

By no means does Jaquith (or CSO, for that matter) think the metrics below are the final word on infosecurity. Quite the contrary: they're a starting point, relatively easy to ascertain, and hopefully smart enough to get CISOs thinking about finding other metrics like these, out in the vast fields of data, waiting to be reaped.

Metric 1: Baseline Defenses Coverage (Antivirus, Antispyware, Firewall, and so on)

This is a measurement of how well you are protecting your enterprise against the most basic information security threats. Your coverage of devices by these security tools should be in the range of 94 percent to 98 percent. Less than 90 percent coverage may be cause for concern. You can repeat the network scan at regular intervals to see if coverage is slipping or holding steady. If in one quarter you've got 96 percent antivirus coverage, and it's 91 percent two quarters later, you may need more formalized protocols for introducing devices to the network or a better way to introduce defenses to devices. In some cases, a drop may stir you to think about working with IT to centralize and unify the process by which devices and security software are introduced to the network. An added benefit: By looking at security coverage, you're also auditing your network and most likely discovering devices the network doesn't know about. "At any given time, your network management software doesn't know about 30 percent of the IP addresses on your network," says Jaquith, because either they were brought online ad hoc or they're transient.
How to get it: Run network scans and canvass departments to find as many devices and their network IP addresses as you can. Then check those devices' IP addresses against the IP addresses in the log files of your antivirus, antispyware, IDS, firewall and other security products to find out how many IP addresses aren't covered by your basic defenses.
Expressed as: Usually a percentage. (For example, 88 percent coverage of devices by antivirus software, 71 percent coverage of devices by antispyware and so forth.)
Not good for: Shouldn't be used for answering the question "How secure am I?" Maximum coverage, while an important baseline, is too narrow in scope to give any sort of overall idea of your security profile. Also, this metric is probably not yet ready to include cell phones, BlackBerrys and other personal devices, because those devices are often transient and not always the property of the company, even if they connect to the company's network.
Try these advanced versions: You can parse coverage percentages according to several secondary variables. For example, percentage coverage by class of device (for instance, 98 percent antivirus coverage of desktops, 87 percent of servers) or by business unit or geography (for instance, 92 percent antispyware coverage of desktops in operations, 83 percent of desktops in marketing) will help uncover tendencies of certain types of infrastructure, people or offices to miss security coverage. In addition, it's a good idea to add a time variable: average age of antivirus definitions (or antispyware or firewall rules and so on). That is, 98 percent antivirus coverage of manufacturing servers is useless if the average age of the virus definitions on manufacturing's servers is 335 days. A star company, Jaquith says, will have 95 percent of its desktops covered by antivirus software with virus definitions less than three days old.
One possible visualization: Baseline defenses can be effectively presented with a "you are here" (YAH) graphic. A YAH needs a benchmark—in this case it's the company's overall coverage. After that, a business unit, geography or other variable can be plotted against the benchmark. This creates an easy-to-see graph of who or what is close to "normal" and will suggest where most attention needs to go. YAHs are an essential benchmarking tool. The word "you" should appear many times on one graphic. Remember, executives aren't scared of complexity as long as it's clear. Here's an example: plotting the percentages of five business units' antivirus and antispyware coverage and the time of their last update against a companywide benchmark.
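
For the mechanics of the coverage calculation, here is a minimal Python sketch. The input files (one IP address per line) and their names are assumptions made for illustration, not a format any particular scanner or log tool is known to produce:

    # baseline_coverage.py -- compare scanned device IPs against the IPs
    # appearing in a security tool's logs (hypothetical file layout:
    # one IP address per line in each file).

    def load_ips(path):
        """Return the set of non-empty lines (IP addresses) in a file."""
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    devices = load_ips("network_scan_ips.txt")   # every device discovered
    covered = load_ips("antivirus_log_ips.txt")  # devices the AV tool knows

    coverage_pct = 100.0 * len(devices & covered) / len(devices)
    print(f"Antivirus coverage: {coverage_pct:.1f}% of {len(devices)} devices")
    print("Uncovered devices:", sorted(devices - covered))

Run the same comparison against each tool's logs (antispyware, firewall and so on) to get the per-tool percentages described above.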

Metric 2: Patch Latency

Patch latency is the time between a patch's release and your successful deployment of that patch. This is an indicator of a company's patching discipline and ability to react to exploits, "especially in widely distributed companies with many business units," according to Jaquith. As with basic coverage metrics, patch latency stats may show machines with lots of missing patches or machines with outdated patches, which might point to the need for centralized patch management or process improvements. At any rate, through accurate patch latency mapping, you can discover the proverbial low-hanging fruit by identifying the machines that might be the most vulnerable to attack.
How to get it: Run a patch management scan on all devices to discover which patches are missing from each machine. Cross-reference those missing patches with a patch clearinghouse service and obtain data on 1. the criticality of each missing patch and 2. when the patches were introduced, to determine how long each missing patch has been available.
Expressed as: Averages. (For example, servers averaged four missing patches per machine. Missing patches on desktops were on average 25 days old.)
Not good for: Companies in the middle of regression testing of patch packages, such as the ones Microsoft releases one Tuesday every month. You should wait to measure patch latency until after regression testing is done and take into account the time testing requires when plotting the information. The metrics might also get skewed by mission-critical systems that have low exposure to the outside world and run so well that you don't patch them for fear of disrupting ops. "There are lots of systems not really open to attack where you say, ‘It runs, don't touch it,'" says Jaquith. "You'll have to make a value judgment [on patch latency] in those cases."
Try these advanced metrics: As with baseline coverage, you can analyze patch latency by business unit, geography or class of device. Another interesting way to look at patch latency statistics is to match your average latency to the average latency of exploits. Say your production servers average 36 days on missing patches' latency, but similar exploits were launched an average of 22 days after a patch was made available. Well, then you have a problem. One other potentially useful way to approach patch latency is to map a patch to its percent coverage over time. Take any important patch and determine its coverage across your network after one day, three days, five days, 10 days and so on.
One possible visualization: For data where you can sum up the results, such as total number of missing patches, a "small multiples" graphic works well. With small multiples you present the overall findings (the whole) as a bar to the left. To the right, you place bars that are pieces making up the whole bar on the left. This presentation will downplay the overall findings in favor of the individual pieces. One key in small multiples graphing is to keep the scale consistent between the whole and the parts. This example plots total number of missing patches for the top and bottom quartiles of devices (the best and worst performers). Then it breaks down by business unit who's contributing to the missing patches.
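
To illustrate the arithmetic, here is a hedged Python sketch. The CSV layout (device class, patch ID, release date) is invented for the example, since every patch management tool exports its scan results differently:

    # patch_latency.py -- average age of missing patches per device class.
    # Assumes a CSV of the form: device,device_class,patch_id,release_date
    import csv
    from collections import defaultdict
    from datetime import date

    ages = defaultdict(list)
    with open("missing_patches.csv") as f:
        for row in csv.DictReader(f):
            released = date.fromisoformat(row["release_date"])
            ages[row["device_class"]].append((date.today() - released).days)

    for device_class, day_list in sorted(ages.items()):
        avg = sum(day_list) / len(day_list)
        print(f"{device_class}: {len(day_list)} missing patches, "
              f"average {avg:.0f} days old")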

Metric 3: Password Strength
This metric offers simple risk reduction by sifting out bad passwords and making them harder to break, and finding potential weak spots where key systems use default passwords. Password cracking can also be a powerful demonstration tool with executives who themselves have weak passwords. By demonstrating to them in person how quickly you can break their password, you will improve your lines of communication with them and their understanding of your role.
How to get it: Using commonly available password cracking programs, attempt to break into systems with weak passwords. Go about this methodically, first attacking desktops, then servers or admin systems. Or go by business unit. You should classify your devices and spend more time attempting to break the passwords to the more important systems. "If it's a game of capture the flag," Jaquith says, "the flag is with the domain controller, so you want stronger access control there, obviously."
Expressed as: Length of time or average length of time required to break passwords. (For example, admin systems averaged 12 hours to crack.) Can be combined with a percentage for a workgroup view (for example, 20 percent of accounts in business unit cracked in less than 10 minutes). Is your password subject to a lunchtime attack? That is, can it be cracked in the 45 minutes you are away from your desk to nosh?
Not good for: User admonishment, judgment. The point of this exercise is not to punish offending users, but to improve your security. Skip the public floggings and just quietly make sure employees stop using their mother's maiden name for access.
Try this: Use password cracking as an awareness-program audit tool. Set up two groups (maybe business units). Give one group password training. The other group is a control; it doesn't get training. After several months and password resets, try to crack the passwords in both groups to see if the training led to better passwords.
One possible visualization: Both YAH and small multiples graphics could work with this metric. (See the graphics for Metric 1 and Metric 2.)
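
As a rough sketch of how you might summarize cracking results, assuming the tool's output has been exported to a CSV with one row per account (a hypothetical layout, where an empty seconds_to_crack means the password survived the run):

    # password_stats.py -- share of accounts per business unit that fall
    # to a "lunchtime attack" (cracked within 45 minutes).
    import csv
    from collections import defaultdict

    LUNCHTIME = 45 * 60  # seconds

    results = defaultdict(lambda: {"total": 0, "lunchtime": 0})
    with open("crack_results.csv") as f:
        for row in csv.DictReader(f):
            unit = results[row["business_unit"]]
            unit["total"] += 1
            t = row["seconds_to_crack"]
            if t and int(t) <= LUNCHTIME:
                unit["lunchtime"] += 1

    for name, r in sorted(results.items()):
        pct = 100.0 * r["lunchtime"] / r["total"]
        print(f"{name}: {pct:.0f}% of accounts crackable within 45 minutes")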

Metric 4: Platform Compliance Scores

Widely available tools, such as the Center for Internet Security (CIS) scoring toolset, can run tests against systems to find out if your hardware meets best-practice standards such as those set by CIS. The software tools take minutes to run, and test such things as whether ports are left unnecessarily open, machines are indiscriminately shared, default permissions are left on, and other basic but often overlooked security lapses. The scoring system is usually simple, and given how quickly the assessments run, CISOs can in short order get a good picture of how "hardened" their hardware is by business unit, by location or by any other variable they please.
Expressed as: Usually a score from 0 to 10, with 10 being the best. Best-in-class, hardened workstations score a 9 or a 10, according to Jaquith. He says this metric is far more rigorous than standard questionnaires that ask if you're using antivirus software or not. "I ran the benchmark against the default build of a machine with Windows XP Service Pack 2, a personal firewall and antivirus protection, and it scored a zero!" Jaquith notes.
Not good for: Auditing, comprehensive risk assessment or penetration testing. While a benchmark like this may be used to support those advanced security functions, it shouldn't replace them. But if you conduct a penetration test after you've benchmarked yourself, chances are the pen test will go more smoothly.
Try this: Use benchmarking in hardware procurement or integration services negotiations, demanding configurations that meet some minimum score. Also demand baseline scores from partners or others who connect to your network.
One possible visualization: An overall score here is simple to do: It's a number between 0 and 10. To supplement that, consider a tree map. Tree maps use color and space in a field to show "hot spots" and "cool spots" in your data. They are not meant for precision; rather, they're a streamlined way to present complex data. They're "moody." They give you a feel for where your problems are most intense. In the case of platform-compliance scores, for instance, you could map the different elements of your benchmark test and assign each element a color based on how risky it is and a size based on how often it was left exposed. Be warned, tree maps are not easy to do. But when done right, they can have instant visual impact.
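
A minimal sketch of the roll-up, with invented sample scores standing in for the output of a scoring tool such as the CIS benchmarks:

    # compliance_scores.py -- roll up hardening scores (0-10) by business
    # unit; the numbers below are invented sample data.
    scores = {
        "finance":    [9.1, 8.7, 9.4],
        "operations": [6.2, 5.8, 7.0],
        "marketing":  [4.5, 5.1, 3.9],
    }

    TARGET = 7.0  # assumed internal target, not a CIS-defined threshold

    for unit, unit_scores in sorted(scores.items()):
        avg = sum(unit_scores) / len(unit_scores)
        flag = "  <-- below target" if avg < TARGET else ""
        print(f"{unit}: average hardening score {avg:.1f}/10{flag}")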

Metric 5: Legitimate E-Mail Traffic Analysis

Legitimate e-mail traffic analysis is a family of metrics including incoming and outgoing traffic volume, incoming and outgoing traffic size, and traffic flow between your company and others. There are any number of ways to parse this data; mapping the communication flow between your company and your competitors may alert you to an employee divulging intellectual property, for example. The fascination to this point has been with comparing the amount of good and junk e-mail that companies are receiving (typically it's about 20 percent good and 80 percent junk). Such metrics can be disturbing, but Jaquith argues they're also relatively useless. By monitoring legitimate e-mail flow over time, you can learn where to set alarm points. At least one financial services company has benchmarked its e-mail flow to the point that it knows to flag traffic when e-mail size exceeds several megabytes and when a certain number go out in a certain span of time.
How to get it: First shed all the spam and other junk e-mail from the population of e-mails that you intend to analyze. Then parse the legitimate e-mails every which way you can.
Not good for: Employee monitoring. Content surveillance is a different beast. In certain cases you may flag questionable content or monitor for it, if there's a previous reason to do this, but traffic analysis metrics aren't concerned with content except as it's related to the size of e-mails. A spike in large e-mails leaving the company and flowing to competitors may signal IP theft.
Added benefit: An investigations group can watch e-mail flow during an open investigation, say, when IP theft is suspected.
Try this: Monitor legitimate e-mail flow over time. CISOs can actually begin to predict the size and shape of spikes in traffic flow by correlating them with events such as an earnings conference call. You can also mine data after unexpected events to see how they affect traffic and then alter security plans to best address those changes in e-mail flow.
One possible visualization: Traffic analysis is suited well to a time series graphic. Time series simply means that the X axis delineates some unit of time over which something happens. In this case, you could map the number of e-mails sent and their average size (by varying the thickness of your bar) over, say, three months. As with any time line, explain spikes, dips or other aberrations with events that correlate to them.
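
One simple way to "learn where to set alarm points," as described above, is a rolling baseline. Here is a hedged sketch using synthetic daily counts in place of real mail gateway logs:

    # email_flow.py -- flag days where legitimate mail volume spikes well
    # above a rolling baseline (synthetic sample data with one obvious spike).
    from statistics import mean, stdev

    daily_counts = [1210, 1180, 1250, 1195, 1230, 1170, 1220,
                    1205, 1240, 3150, 1215, 1190]

    WINDOW = 7  # days of history used as the baseline
    for day in range(WINDOW, len(daily_counts)):
        baseline = daily_counts[day - WINDOW:day]
        threshold = mean(baseline) + 3 * stdev(baseline)
        if daily_counts[day] > threshold:
            print(f"Day {day}: {daily_counts[day]} messages "
                  f"(threshold {threshold:.0f}) -- investigate")
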
Metric 6: Application Risk Index
How to get it: Build a risk indexing tool to measure risks in your top business applications. The tool should ask questions about the risks in the application, with certain answers corresponding to a certain risk value. Those risks are added together to create an overall risk score.
Expressed as: A score, or temperature, or other scale for which the higher the number, the higher the exposure to risk. Could also be a series of scores for different areas of risk (for example, business impact score of 10 out of 16, compliance score of 3 out of 16, and other risks score of 7 out of 16).
Industry benchmark: None exist. Even though the scores are based on observable facts about your applications (is it customer-facing? does it include identity management? is it subject to regulatory review?), this is the most subjective metric on the list, because you or someone else puts the initial values on the risks in the survey instrument. For example, it might be a fact that your application is customer-facing, but does that merit two risk points or four?
Good for: Prioritizing your plans for reducing risk in key applications—homegrown or commercial. By scoring all of your top applications with a consistent set of criteria, you’ll be able to see where the most risk lies and make decisions on what risks to mitigate.
Not good for: Actuarial or legal action. The point of this exercise is for internal use only as a way to gauge your risks, but the results are probably not scientific enough to help set insurance rates or defend yourself in court.
Added benefit: A simple index like this is a good way to introduce risk analysis into information security (if it’s not already used) because it follows the principles of risk management without getting too deeply into statistics.
Try this: With your industry consortia, set up an industrywide group to use the same scorecard and create industrywide application risk benchmarks to share (confidentially, of course). One industry can reduce risk for everyone in the sector by comparing risk profiles on similar tools. (Everyone in retail, for example, uses retail point-of-sale systems and faces similar application risks.)
One possible visualization: Two-by-two grids could be used here to map your applications and help suggest a course of action. Two-by-twos break risk and impact into four quadrants: low risk/low impact, low risk/high impact, high risk/low impact, high risk/high impact. A good way to use these familiar boxes is to label each box with a course of action and then plot your data in the boxes. What you’re doing is facilitating decision-making by constraining the number of possible courses of action to four. If you need to get things done, use two-by-two grids to push executives into decision making.
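
A toy sketch of such an indexing tool follows. The questions and point values are illustrative assumptions; the essence of this metric is that you choose, and can defend, your own weights:

    # app_risk_index.py -- toy application risk scorecard. The risk
    # factors and point values below are invented for illustration.
    RISK_POINTS = {
        "customer_facing":       4,
        "handles_identity_data": 4,
        "subject_to_regulation": 3,
        "internet_exposed":      3,
        "no_recent_pen_test":    2,
    }

    def risk_score(answers):
        """Sum the points for every risk factor answered 'yes'."""
        return sum(RISK_POINTS[q] for q, yes in answers.items() if yes)

    order_entry = {  # hypothetical application profile
        "customer_facing": True,
        "handles_identity_data": True,
        "subject_to_regulation": False,
        "internet_exposed": True,
        "no_recent_pen_test": False,
    }
    print(f"Order entry app: {risk_score(order_entry)} of "
          f"{sum(RISK_POINTS.values())} possible risk points")

Scoring every key application with the same scorecard gives you the consistent, comparable numbers the metric calls for, even though the weights themselves remain a judgment call.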

Saturday, August 25, 2007

Books for IT Managers

Here’s a list of some of the books that have resonated particularly well with me. I have learned a lot from each of these books, and I highly recommend them to all sorts of IT Managers, team leaders, and project management professionals.

Books about IT, but not a specific technology:

* Becoming a Technical Leader: An Organic Problem-Solving Approach
* Rapid Development
* Code Complete
* The Practice of System and Network Administration
* Peopleware
* The Pragmatic Programmer

Books about Lean and the Toyota Production System

* Lean Software Development
* Lean Thinking
* Product Development for the Lean Enterprise: Why Toyota’s System Is Four Times More Productive and How You Can Implement It

Books about Human Relationships

* Crucial Conversations
* Getting to Yes
* Influence: Science and Practice

Books about organization/time management

* Getting Things Done
* Organizing from the Inside Out

Monday, August 20, 2007

Cobit

Overview

COBIT was first released in 1996. Its mission is “to research, develop, publicize and promote an authoritative, up-to-date, international set of generally accepted information technology control objectives for day-to-day use by business managers and auditors.” Managers, auditors, and users benefit from the development of COBIT because it helps them understand their IT systems and decide the level of security and control that is necessary to protect their companies’ assets through the development of an IT governance model.

COBIT has 34 high-level processes that cover 318 control objectives categorized in four domains: Planning and Organization, Acquisition and Implementation, Delivery and Support, and Monitoring. COBIT provides benefits to managers, IT users, and auditors. Managers benefit from COBIT because it provides them with a foundation upon which IT-related decisions and investments can be based. Decision making is more effective because COBIT aids management in defining a strategic IT plan, defining the information architecture, acquiring the necessary IT hardware and software to execute an IT strategy, ensuring continuous service, and monitoring the performance of the IT system. IT users benefit from COBIT because of the assurance provided to them by COBIT’s defined controls, security, and process governance. COBIT benefits auditors because it helps them identify IT control issues within a company’s IT infrastructure. It also helps them corroborate their audit findings.

COBIT product family

The complete COBIT package is a set consisting of six publications:

  • Executive Summary
  • Framework
  • Control Objectives
  • Audit Guidelines
  • Implementation Tool Set
  • Management Guidelines

A brief overview of each of the above components is provided below.

Executive Summary

Sound business decisions are based on timely, relevant and concise information. Specifically designed for time-pressed senior executives and managers, the COBIT Executive Summary consists of an Executive Overview, which provides a thorough awareness and understanding of COBIT's key concepts and principles. Also included is a synopsis of the Framework, which provides a more detailed understanding of these concepts and principles, while identifying COBIT's four domains (Planning and Organization, Acquisition and Implementation, Delivery and Support, Monitoring) and 34 IT processes.

Control Objectives for Information and related Technology (COBIT®) provides good practices across a domain and process framework and presents activities in a manageable and logical structure. COBIT’s good practices represent the consensus of experts. They are strongly focused more on control, less on execution. These practices will help optimise IT-enabled investments, ensure service delivery and provide a measure against which to judge when things do go wrong.

For IT to be successful in delivering against business requirements, management should put an internal control system or framework in place. The COBIT control framework contributes to these needs by:

• Making a link to the business requirements
• Organising IT activities into a generally accepted process model
• Identifying the major IT resources to be leveraged
• Defining the management control objectives to be considered

Thus, COBIT supports IT governance (figure 2) by providing a framework to ensure that:

• IT is aligned with the business
• IT enables the business and maximises benefits
• IT resources are used responsibly
• IT risks are managed appropriately

The COBIT products have been organised into three levels (figure 3) designed to support:

• Executive management and boards
• Business and IT management
• Governance, assurance, control and security professionals

Briefly, the COBIT products include:

• Board Briefing on IT Governance, 2nd Edition—Helps executives understand why IT governance is important, what its issues are and what their responsibility is for managing it
• Management guidelines/maturity models—Help assign responsibility, measure performance, and benchmark and address gaps in capability
• Frameworks—Organise IT governance objectives and good practices by IT domains and processes, and link them to business requirements
• Control objectives—Provide a complete set of high-level requirements to be considered by management for effective control of each IT process
• IT Governance Implementation Guide: Using COBIT® and Val IT™, 2nd Edition—Provides a generic road map for implementing IT governance using the COBIT and Val IT™ resources
• COBIT® Control Practices: Guidance to Achieve Control Objectives for Successful IT Governance, 2nd Edition—Provides guidance on why controls are worth implementing and how to implement them
• IT Assurance Guide: Using COBIT®—Provides guidance on how COBIT can be used to support a variety of assurance activities, together with suggested testing steps for all the IT processes and control objectives

The COBIT content diagram depicted in figure 3 presents the primary audiences, their questions on IT governance and the generally applicable products that provide responses. There are also derived products for specific purposes, for domains such as security, or for specific enterprises.

Framework

A successful organization is built on a solid framework of data and information. The Framework explains how IT processes deliver the information that the business needs to achieve its objectives. This delivery is controlled through 34 high-level control objectives, one for each IT process, contained in the four domains. The Framework identifies which of the seven information criteria (effectiveness, efficiency, confidentiality, integrity, availability, compliance and reliability), as well as which IT resources (people, applications, information and infrastructure) are important for the IT processes to fully support the business process.

To govern IT effectively, it is important to appreciate the activities and risks within IT that need to be managed. They are usually ordered into the responsibility domains of plan, build, run and monitor. Within the COBIT framework, these domains, as shown in figure 8, are called:

• Plan and Organise (PO)—Provides direction to solution delivery (AI) and service delivery (DS)
• Acquire and Implement (AI)—Provides the solutions and passes them to be turned into services
• Deliver and Support (DS)—Receives the solutions and makes them usable for end users
• Monitor and Evaluate (ME)—Monitors all processes to ensure that the direction provided is followed

Control Objectives

The key to maintaining profitability in a technologically changing environment is how well you maintain control. COBIT's Control Objectives provides the critical insight needed to delineate a clear policy and good practice for IT controls. Included are the statements of desired results or purposes to be achieved by implementing the 214 specific, detailed control objectives throughout the 34 IT processes.

Audit Guidelines

To achieve your desired goals and objectives you must constantly and consistently audit your procedures. Audit Guidelines outline and suggest actual activities to be performed corresponding to each of the 34 high-level IT control objectives, while substantiating the risk of control objectives not being met. Audit Guidelines are an invaluable tool for information systems auditors in providing management assurance and/or advice for improvement.

Implementation Tool Set

The Implementation Tool Set contains Management Awareness and IT Control Diagnostics, an Implementation Guide, FAQs, case studies from organizations currently using COBIT, and slide presentations that can be used to introduce COBIT into organizations. The Tool Set is designed to facilitate the implementation of COBIT, relate lessons learned from organizations that quickly and successfully applied COBIT in their work environments, and lead management to ask about each COBIT process: Is this domain important for our business objectives? Is it well performed? Who does it and who is accountable? Are the processes and controls formalized?

Management Guidelines

To ensure a successful enterprise, you must effectively manage the union between business processes and information systems. The new Management Guidelines are composed of Maturity Models, to help determine the stages and expectation levels of control and compare them against industry norms; Critical Success Factors, to identify the most important actions for achieving control over the IT processes; Key Goal Indicators, to define target levels of performance; and Key Performance Indicators, to measure whether an IT control process is meeting its objective. These Management Guidelines will help answer the questions of immediate concern to all those who have a stake in enterprise success.

COBIT structure

COBIT covers four domains:

  • Plan and Organize
  • Acquire and Implement
  • Deliver and Support
  • Monitor and Evaluate

Plan and Organize

The Plan and Organize domain covers the use of information and technology, and how it can best be used in a company to help achieve the company’s goals and objectives. It also highlights the organizational and infrastructural form IT is to take in order to achieve optimal results and generate the most benefits from the use of IT. The following table lists the high-level control objectives for the Plan and Organize domain.

HIGH-LEVEL CONTROL OBJECTIVES

Plan and Organize

PO1: Define a Strategic IT Plan and direction
PO2: Define the Information Architecture
PO3: Determine Technological Direction
PO4: Define the IT Processes, Organization and Relationships
PO5: Manage the IT Investment
PO6: Communicate Management Aims and Direction
PO7: Manage IT Human Resources
PO8: Ensure Compliance with External Requirements
PO9: Assess and Manage IT Risks
PO10: Manage Projects
PO11: Manage Quality

Acquire and Implement

The Acquire and Implement domain covers identifying IT requirements, acquiring the technology, and implementing it within the company’s current business processes. This domain also addresses the development of a maintenance plan that a company should adopt in order to prolong the life of an IT system and its components. The following table lists the high-level control objectives for the Acquire and Implement domain.

HIGH-LEVEL CONTROL OBJECTIVES

Acquire and Implement

AI1: Identify Automated Solutions
AI2: Acquire and Maintain Application Software
AI3: Acquire and Maintain Technology Infrastructure
AI4: Enable Operation and Use
AI5: Procure IT Resources
AI6: Manage Changes
AI7: Install and Accredit Solutions and Changes

Deliver and Support

The Deliver and Support domain focuses on the delivery aspects of information technology. It covers areas such as the execution of applications within the IT system and its results, as well as the support processes that enable the effective and efficient execution of these IT systems. These support processes include security issues and training. The following table lists the high-level control objectives for the Deliver and Support domain.

HIGH-LEVEL CONTROL OBJECTIVES

Deliver and Support

DS1: Define and Manage Service Levels
DS2: Manage Third-party Services
DS3: Manage Performance and Capacity
DS4: Ensure Continuous Service
DS5: Ensure Systems Security
DS6: Identify and Allocate Costs
DS7: Educate and Train Users
DS8: Manage Service Desk and Incidents
DS9: Manage the Configuration
DS10: Manage Problems
DS11: Manage Data
DS12: Manage the Physical Environment
DS13: Manage Operations

Monitor and Evaluate

The Monitor and Evaluate domain deals with a company’s strategy for assessing its own needs and whether the current IT system still meets the objectives for which it was designed, as well as the controls necessary to comply with regulatory requirements. Monitoring also covers independent assessment, by internal and external auditors, of the effectiveness of the IT system in meeting business objectives and of the company’s control processes. The following table lists the high-level control objectives for the Monitor and Evaluate domain.

HIGH-LEVEL CONTROL OBJECTIVES

Monitor and Evaluate

ME1: Monitor and Evaluate IT Processes
ME2: Monitor and Evaluate Internal Control
ME3: Ensure Regulatory Compliance
ME4: Provide IT Governance


Friday, August 17, 2007

An Introduction to IT Governance

From relative obscurity a few years ago, formal IT governance has come to look like a good idea for virtually every company, both public and private, as several factors have converged to push it forward. Key motivators include the need to comply with a growing list of regulations related to financial and technological accountability, and pressure from shareholders and customers. Here’s a quick primer on the basics of IT governance:

What is IT governance?
Simply put, it’s putting structure around how organizations align IT strategy with business strategy, ensuring that companies stay on track to achieve their strategies and goals, and implementing good ways to measure IT’s performance. It makes sure that all stakeholders’ interests are taken into account and that processes provide measurable results. An IT governance framework should answer some key questions, such as how the IT department is functioning overall, what key metrics management needs and what return IT is giving back to the business from the investment it’s making.
Is it something every organization needs?
Every organization—large and small, public and private—needs a way to ensure that the IT function sustains the organization’s strategies and objectives. The level of sophistication you apply to IT governance, however, may vary according to size, industry or applicable regulations. In general, the larger and more regulated the organization, the more detailed the IT governance structure should be.

What are the drivers that motivate organizations to implement IT governance infrastructures?
Organizations today are subject to many regulations governing data retention, confidential information, financial accountability and recovery from disasters. While none of these regulations requires an IT governance framework, many have found it to be an excellent way to ensure regulatory compliance. By implementing IT governance, you’ll have the internal controls you need to meet the core guidelines of many of these regulations, such as the Sarbanes-Oxley Act of 2002.

What’s the business case? That is, how can I convince top management that we need to do this?
Make sure the right people are selling the concept; if IT is selling it, you’re in trouble. It’s much more effective if a cross-functional team consisting of IT and line-of-business managers makes the case to the board of directors that effective IT management is an important part of the company’s success. The team must be able to explain that the company needs a road map—something to tell decision-makers where the company is, where it needs to be and how best to get there. And of course, talk about the benefits—greater efficiency and accountability, along with reduced risk. Be careful, however, when talking about ROI: A lot of the cost of implementing an IT governance framework can be chalked up to what management should be doing anyway. Simply put, companies have to accept the cost, but they don’t like to hear that.

What are the major focus areas that make up IT governance?
According to the IT Governance Institute, there are five areas of focus:

  • Strategic alignment: Linking business and IT so they work well together. Typically, the lightning rod is the planning process, and true alignment can occur only when the corporate side of the business communicates effectively with line-of-business leaders and IT leaders about costs, reporting and impacts.

  • Value delivery: Making sure that the IT department does what’s necessary to deliver the benefits promised at the beginning of a project or investment. The best way to get a handle on everything is to develop a process that accelerates certain functions when the value proposition is growing and eliminates functions when the value decreases.

  • Resource management: One way to manage resources more effectively is to organize your staff more efficiently—for example, by skills instead of by line of business. This allows organizations to deploy employees to various lines of business on a demand basis.

  • Risk management: Instituting a formal risk framework that puts some rigor around how IT measures, accepts and manages risk, as well as reporting on what IT is managing in terms of risk.

  • Performance measures: Putting structure around measuring business performance. One popular method involves instituting an IT Balanced Scorecard, which examines where IT makes a contribution in terms of achieving business goals, being a responsible user of resources and developing people. It uses both qualitative and quantitative measures to get those answers.

This appears pretty complicated; how do you actually implement everything involved in IT governance?
It doesn’t make sense to reinvent the wheel by starting from scratch, so don’t even try. Start with a framework; there are many to choose from, but using at least one means everything has already been organized and bulletproofed by industry experts worldwide. These frameworks even offer implementation guides. And most companies use a framework: According to a survey by PricewaterhouseCoopers in conjunction with the IT Governance Institute, 95 percent of companies use one of the major IT governance frameworks, while only a few create their own.

Here is a quick rundown on the choices:

CoBIT: This framework, from the Information Systems Audit and Control Association (ISACA), is probably the most popular. Basically, it’s a set of guidelines and supporting toolset for IT governance that is accepted worldwide. It’s used by auditors and companies as a way to integrate technology to implement controls and meet specific business objectives. The latest version, released in May 2007, is CoBIT 4.1. CoBIT is well-suited to organizations focused on risk management and mitigation.

ITIL: The Information Technology Infrastructure Library (ITIL), from the government of the United Kingdom, runs a close second to CoBIT. It offers eight sets of management procedures in eight books: service delivery, service support, service management, ICT infrastructure management, software asset management, business perspective, security management and application management. ITIL is a good fit for organizations concerned about operations.

COSO: This model for evaluating internal controls is from the Committee of Sponsoring Organizations of the Treadway Commission. It includes guidelines on many functions, including human resource management, inbound and outbound logistics, external resources, information technology, risk, legal affairs, the enterprise, marketing and sales, operations, all financial functions, procurement and reporting. This is a more business-general framework that is less IT-specific than the others.

CMMI: The Capability Maturity Model Integration method, created by a group from government, industry and Carnegie Mellon’s Software Engineering Institute, is a process improvement approach that contains 22 process areas. It is divided into appraisal, evaluation and structure. CMMI is particularly well-suited to organizations that need help with application development, lifecycle issues and improving the delivery of products throughout the lifecycle.

There are a lot of framework choices. How do I choose?
Most companies go with CoBIT or ITIL, but others can also fit the bill. For operations, try ITIL. For application development and lifecycle issues, try CMMI. For risk, use CoBIT. CoBIT is also a great umbrella framework. But combining frameworks can also make sense, says Ron Saull, an IT Governance Institute trustee. You might want to use CoBIT as an overall framework; then use ITIL for your operations, CMMI for development and ISO 17799 for security. In fact, combining frameworks is fairly common; the PricewaterhouseCoopers study found that in 65 percent of cases, companies use CoBIT and ITIL together or with lesser-known frameworks. But most importantly, use a framework that fits your corporate culture and that your stakeholders are familiar with. If the company is using one of these frameworks and can leverage it to be its IT governance framework, all the better.

Can we do this alone, or should we get some outside help?
Sometimes it makes sense to get help, and implementing an IT governance framework is one of those times. Not only is internal expertise on IT governance hard to come by, but executives just don’t have the time. The best scenario is usually a combination of the two. Internally, someone really needs to own the process, but getting some help is essential.

What can go wrong if it’s not implemented effectively?
If the IT governance framework isn’t implemented properly, it can directly affect how IT is perceived at a high level. The last thing you want is for IT to be perceived as a cost center that doesn’t produce real value, says Marios Damianides, former international president of ISACA and the IT Governance Institute, and currently a partner for Ernst & Young. Lack of effective implementation also can cause continued issues with project overruns and poor value to cost measurements, not to mention stakeholder dissatisfaction.

What are some tips for making sure it goes smoothly and delivers positive results?
You’ve heard it all before, but here we go: Get executive buy-in. Dedicate a cross-functional team to the process, and get outside help if needed. Clearly delineate the roles and responsibilities of each department and stakeholder in clear terms. Take into account the corporate culture and adjust accordingly. Maintain continual communication during the process. Measure and monitor the progress of the implementation. And don’t consider this a “nice-to-have”—it’s a “need-to-have.”

9 Essential Competencies for Successful C-Level Executives

In 30 years of assessing executive talent, recruitment firm Egon Zehnder International determined that the competencies listed below are core to C-level executive success. The CIO Executive Council has adapted these competencies for assessment and development programs to aid CIOs and their senior staff in achieving their full potential as strategic enterprise leaders.
1. STRATEGIC ORIENTATION
Strategic Orientation is about the ability to think long-term and beyond one’s own area. It involves three key dimensions: business awareness, critical analysis and integration of information, and the ability to develop an action-oriented plan.
2. CUSTOMER IMPACT
Customer Impact is about serving and building value-added relationships with customers or clients, be they internal or external.
3. MARKET KNOWLEDGE
Market Knowledge is about understanding the market in which a business operates. This business context can include the competition, the suppliers, the customer base and the regulatory environment.
4. COMMERCIAL ORIENTATION
Commercial Orientation is about identifying and moving towards business opportunities, seizing chances to increase profit and revenue.
5. RESULTS ORIENTATION
Results Orientation is about being focused on improvement of business results.
6. CHANGE LEADERSHIP
Change Leadership is about transforming and aligning an organization through its people to drive for improvement in new and challenging directions. It is energizing a whole organization to want to change in the same direction.
7. COLLABORATION AND INFLUENCE
Collaboration and Influence are about working effectively with, and influencing those outside of, your functional area for positive impact on business performance.
8. PEOPLE AND ORGANIZATIONAL DEVELOPMENT
People and Organizational Development is about developing the long-term capabilities of others and the organization as a whole, and finding satisfaction in influencing or even transforming someone’s life or career.
9. TEAM LEADERSHIP
Team Leadership is about focusing, aligning and building effective groups both within one’s immediate organization and across functions.

From CIO to CEO

CIOs vs. CEOs
Examining the competency performance data based on interviews and 360-degree assessments of 25,000 executives in the Egon Zehnder database, we find five points:

1. Outstanding CIOs (those ranked in the top 15th percentile) score highest in Results Orientation, Strategic Orientation, Change Leadership and Customer Focus.

2. Outstanding CIOs perform significantly better than average CIOs in all competencies except for People and Organizational Development, where they are equivalent.

3. People and Organizational Development scores are relatively low for all types of executives assessed, particularly CFOs.

4. Outstanding CIO scores slightly surpass good CEO scores on most competencies.

5. Outstanding CEOs—the most well-rounded strategic leaders—perform significantly better than outstanding CIOs only in Market Knowledge and External Customer Focus.

How to Improve Your Executive Quotient (EQ)
CIOs who want to devote more of their time and energy to driving business strategy and innovation should focus on developing and leveraging the three competencies most particular to the business strategist: Market Knowledge, Strategic Orientation and Commercial Orientation. (See the “Future-State CIO Model,” above, for more on how each competency maps to three aspects of the CIO role: Function Head, Transformational Leader and Business Strategist.) However, even to get a chance to be a business strategist, CIOs must be strong in foundational competencies such as Change Leadership, Collaboration and Influence, and Function Expertise. Without these, a CIO is unlikely to get a seat at the strategy table, and may in reality be a CIO only in title.

Some terms used frequently

Enterprise resource planning

Enterprise Resource Planning systems (ERPs) integrate (or attempt to integrate) all data and processes of an organization into a unified system. A typical ERP system will use multiple components of computer software and hardware to achieve the integration. A key ingredient of most ERP systems is the use of a unified database to store data for the various system modules.

Ideally, ERP delivers a single database that contains all data for the software modules, which would include:

  • Manufacturing: Engineering, Bills of Material, Scheduling, Capacity, Workflow Management, Quality Control, Cost Management, Manufacturing Process, Manufacturing Projects, Manufacturing Flow
  • Supply Chain Management: Inventory, Order Entry, Purchasing, Product Configurator, Supply Chain Planning, Supplier Scheduling, Inspection of Goods, Claim Processing, Commission Calculation
  • Financials: General Ledger, Cash Management, Accounts Payable, Accounts Receivable, Fixed Assets
  • Projects: Costing, Billing, Time and Expense, Activity Management
  • Human Resources: Human Resources, Payroll, Training, Time & Attendance, Benefits
  • Customer Relationship Management: Sales and Marketing, Commissions, Service, Customer Contact and Call Center Support
  • Data Warehouse and various Self-Service interfaces for Customers, Suppliers, and Employees

ERPs are cross-functional and enterprise wide. All functional departments that are involved in operations or production are integrated in one system. In addition to manufacturing, warehousing, logistics, and Information Technology, this would include accounting, human resources, marketing, and strategic management.

ERP II refers to an open, component-based ERP architecture; the older, monolithic ERP systems became component oriented.

In the absence of an ERP system, a large manufacturer may find itself with many software applications that do not talk to each other and do not effectively interface.

Other confusing terms:

IBM Information Management System (IMS) is a joint hierarchical database and information management system with extensive transaction processing capability.

Customer relationship management (CRM) is a broad term that covers concepts used by companies to manage their relationships with customers, including the capture, storage and analysis of customer information.

There are three aspects of CRM which can each be implemented in isolation from each other:

  • Operational CRM: automation or support of customer processes that include a company’s sales or service representative
  • Collaborative CRM: direct communication with customers that does not include a company’s sales or service representative (“self service”)
  • Analytical CRM: analysis of customer data for a broad range of purposes

Operational CRM

Operational CRM provides support to "front office" business processes, including sales, marketing and service. Each interaction with a customer is generally added to a customer's contact history, and staff can retrieve information on customers from the database as necessary.

One of the main benefits of this contact history is that customers can interact with different people or different contact “channels” in a company over time without having to repeat the history of their interaction each time.

Consequently, many call centers use some kind of CRM software to support their call center agents.

Collaborative CRM

Collaborative CRM covers the direct interaction with customers, for a variety of different purposes, including feedback and issue-reporting. Interaction can be through a variety of channels, such as internet, email, automated phone (Automated Voice Response, AVR), SMS or mobile email.

Studies have shown that feedback through SMS or mobile email provides greater efficiency relative to alternative channels. Part of this has to do with the ease of use of particular feedback channels. A study of telephone feedback showed that if consumers cannot get through to customer service centres, 31% hang up and go to a competitor, and 24% give up altogether. In addition, a separate study found that a bad experience with a customer call centre led 56% of callers to stop doing business with the organisation concerned (Ian Brooks, May 2006). Other studies have shown similar findings; a study in the trade journal Quality Progress showed that only 4% of unsatisfied customers complain, whereas 96% go to competitors. Additionally, 90% of defecting customers do not come back (Scriabina, Fomichov 2005).

Feedback through text has many advantages; not only does it allow the consumer to give feedback at the point of experience (in-situ) but additionally it allows companies to capture insight from a wider consumer base.


The objectives of Collaborative CRM can be broad, including cost reduction and service improvements.

Analytical CRM

Analytical CRM analyses customer data for a variety of purposes including

  • design and execution of targeted marketing campaigns to optimise marketing effectiveness
  • design and execution of specific customer campaigns, including customer acquisition, cross-selling, up-selling, retention
  • analysis of customer behaviour to aid product and service decision making (e.g. pricing, new product development etc.)
  • management decisions, e.g. financial forecasting and customer profitability analysis
  • prediction of the probability of customer defection (churn).

Analytical CRM generally makes heavy use of predictive analytics.
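
As a hedged illustration of what the churn-prediction piece might look like in code, here is a sketch using scikit-learn on synthetic data; the features and the underlying churn rule are invented for the example, standing in for real customer history pulled from the CRM database:

    # churn_sketch.py -- minimal "analytical CRM" churn model on
    # synthetic data (features and churn rule are invented).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1000
    # invented features: months as customer, support tickets, monthly spend
    X = np.column_stack([
        rng.integers(1, 60, n),
        rng.poisson(2, n),
        rng.normal(50, 15, n),
    ])
    # synthetic ground truth: short-tenure, high-ticket customers churn more
    logit = -2.0 - 0.03 * X[:, 0] + 0.8 * X[:, 1]
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    churn_prob = model.predict_proba(X_test)[:, 1]
    print(f"AUC on held-out customers: {roc_auc_score(y_test, churn_prob):.2f}")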

Technology strategy

A Technology strategy (as in Information technology) is a planning document that explains how information technology should be utilized as part of an organization's overall business strategy. The document is usually created by an organization's Chief Information Officer (CIO) or technology manager and should be designed to support the organization's overall business plan.

Relationship between strategy and enterprise technology architecture

A technology strategy document typically refers to, but does not duplicate, an overall enterprise architecture.
Wednesday, August 15, 2007

Starters for IT


So, to begin, here are some laws, i.e. the rules of the IT game:

Moore's Law is the empirical observation, made in 1965, that the number of transistors on an integrated circuit for minimum component cost doubles every 24 months. It is attributed to Gordon E. Moore (born 1929), a co-founder of Intel. Although it is sometimes quoted as every 18 months, Intel's official Moore's Law page, as well as an interview with Gordon Moore himself, states that it is every two years. Under the assumption that chip "complexity" is proportional to the number of transistors, regardless of what they do, the law has largely stood the test of time to date. Moore's law is not just about the density of transistors that can be achieved, but about the density of transistors at which the cost per transistor is the lowest.

Wirth's law in computing was made popular by Niklaus Wirth in 1995. The law states:
Software gets slower faster than hardware gets faster.

or

Software is decelerating faster than hardware is accelerating.

Hardware is clearly getting faster over time, and some of that development is quantified by Moore's law; Wirth's law points out that this does not imply that work is actually getting done faster. Programs tend to get bigger and more complicated over time, and sometimes programmers even rely on Moore's law to justify writing slow code, thinking that it won't be a problem because the hardware will get faster anyway.

As an example of Wirth's law, one can observe that booting a modern PC with a modern operating system usually takes longer than it did to boot a PC five or ten years ago. Software is being used to create software: by using Integrated Development Environments, compilers and code libraries, programmers are further and further divorced from the machine and closer and closer to the software user's needs. This results in many layers of interpretation, which take a complex requirement in human-understandable form and convert it into a very large number of the extremely limited set of instructions that a computer can perform.

Gates' Law is a humorous and ironic observation that the speed of commercial software generally slows by fifty percent every 18 months. Though the law's name is meant to refer to Bill Gates, Gates himself did not actually formulate it. Rather, the term is attributed to Gates based on the noted tendency of Microsoft products to slow down with each successive feature or patch.

Bell's Law of Computer Classes: "Roughly every decade a new, lower priced computer class forms based on a new programming platform, network, and interface resulting in new usage and the establishment of a new industry."
Established market-class computers, aka platforms, are introduced and continue to evolve at roughly a constant price. In 2005 the computer classes included: mainframes (1960s); minicomputers (1970s); personal computers and workstations, evolving into networks enabled by Local Area Networking or Ethernet (1980s); the web browser client-server structure enabled by the Internet (1990s); web services, e.g. Microsoft's .NET (2000s) or the Grid; cell-phone-sized devices (c. 2000); and Wireless Sensor Networks, aka motes (c. 2005). Bell predicts home and body area networks will form by 2010.

Brooks's law was stated by Fred Brooks in his 1975 book The Mythical Man-Month as "Adding manpower to a late software project makes it later." Likewise, Brooks memorably stated, "The bearing of a child takes nine months, no matter how many women are assigned." While Brooks's law is often quoted, the line before it in The Mythical Man-Month is almost never quoted: "Oversimplifying outrageously, we state Brooks's Law." One reason for the seeming contradiction is that software projects are complex engineering endeavors, and new workers on the project must first become educated in the work. Another significant reason is that communication overhead increases as the number of people increases.

Metcalfe's law states that the value of a telecommunications network is proportional to the square of the number of users of the system (n²). First formulated by Robert Metcalfe in regard to Ethernet, Metcalfe's law explains many of the network effects of communication technologies and networks such as the Internet and World Wide Web.
The law has often been illustrated using the example of fax machines: A single fax machine is useless, but the value of every fax machine increases with the total number of fax machines in the network, because the total number of people with whom each user may send and receive documents increases.

Since a user cannot connect to itself, the reasoning goes, the actual calculation is the number of diagonals and sides in an n-gon (see also the triangular numbers):

n(n − 1) / 2

However, that value, which simplifies to (n² − n) / 2, is proportional (in Big O terms) to the square of the number of users, so this remains consistent with Metcalfe's original law.

Metcalfe's Law can be applied to more than just telecommunications devices; it can be applied to almost any computer system that exchanges data.


Reed's law is the assertion of David P. Reed that the utility of large networks, particularly social networks, can scale exponentially with the size of the network.

The reason for this is that the number of possible sub-groups of network participants is 2^N − N − 1, where N is the number of participants. This grows much more rapidly than either

  • the number of participants, N, or
  • the number of possible pair connections, N(N − 1) / 2 (which follows Metcalfe's law)

so that even if the utility of groups available to be joined is very small on a per-group basis, eventually the network effect of potential group membership can dominate the overall economics of the system.

Given a set A of N people, it has 2^N possible subsets. This is not difficult to see, since we can form each possible subset by simply choosing, for each element of A, one of two possibilities: whether to include that element or not.

However, this includes the (one) empty set and the N singletons, which are not properly subgroups. So 2^N − N − 1 subsets remain, which still grows exponentially, like 2^N.

From David P. Reed's "The Law of the Pack" (Harvard Business Review, February 2001, pp. 23-24):

"Even Metcalfe's Law understates the value created by a group-forming network as it grows. Let's say you have a GFN with n members. If you add up all the potential two-person groups, three-person groups, and so on that those members could form, the number of possible groups equals 2^n. So the value of a GFN increases exponentially, in proportion to 2^n. I call that Reed's Law. And its implications are profound."


That's all for today... we'll be getting deeper as we go on.