Wednesday, November 21, 2007

The ROI of Open Source

The ROI of open-source software is a contentious issue. Many analyses have compared the ROI of upgrading a Windows installation against that of switching to Linux and have concluded that it is less expensive to stick with Windows. That should not be surprising: switching from Windows to Linux is the worst-case ROI scenario. After all, the new platform requires training and perhaps hiring new personnel (always expensive propositions), versus merely paying for licenses.

A more important question is: can open source generate real ROI elsewhere? Yes. Oregon State University (OSU), for example, has websites that visitors need to search, so the school bought a Google appliance for about $125,000 per year. Two years later, OSU’s IT department, aided by the Open Source Lab, replaced the appliance with an open-source search product called Nutch (license cost: $0). Nutch is not as easy to use as the Google software, so additional administration costs run to about $10,000 yearly. The overall five-year analysis, however, even after accounting for additional hardware and engineering time, still produced an internal rate of return of 2,300 percent.
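The internal-rate-of-return arithmetic behind a case like OSU's can be sketched in a few lines. The cash-flow figures below are illustrative assumptions, not OSU's actual numbers: a one-time switching cost in year 0, followed by five years of avoided appliance fees minus the extra administration cost mentioned above.

```python
# Hypothetical cash flows for an open-source search migration, loosely
# modeled on the OSU example. The $30,000 upfront cost is an assumption.
def npv(rate, cash_flows):
    """Net present value of cash_flows[t] discounted at `rate`, t in years."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=100.0, tol=1e-6):
    """Find the rate where NPV crosses zero, via bisection.

    Assumes one sign change (initial outlay, then inflows), so NPV is
    monotonically decreasing in the rate.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Year 0: assumed one-time hardware/engineering cost.
# Years 1-5: appliance fee avoided ($125k) minus extra admin ($10k).
flows = [-30_000] + [125_000 - 10_000] * 5
print(f"IRR: {irr(flows):.0%}")
```

Even with these deliberately conservative placeholder numbers, the IRR runs into the hundreds of percent, which is why small upfront switching costs against large recurring savings produce the eye-popping figures quoted in the column.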

The key to success is determining which projects make sense for open source. To get started, treat each product individually: savvy organizations consider both commercial and open-source options for a project and choose the right product for the situation. Then make sure you evaluate over the proper time horizon. A single head-to-head comparison between a commercial product and its open-source counterpart may not favor open source, because the costs of training and switching can outweigh the cost of a commercial license; but extending the horizon to the realistic life of the application may tip the balance toward open source. Finally, take the entire organization into account. While a specific open-source project may not offer great ROI, the cost benefits of pioneer applications often materialize downstream, in later projects that can adopt the open-source package. Even if you purchase enterprise licenses for your commercial products, so that the marginal cost of a new application is effectively zero, keep in mind that someday, when those licenses come up for renewal, that marginal cost may be much higher.
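The time-horizon point can be made concrete with a toy cumulative-cost comparison. All figures here are hypothetical, chosen only to show how a longer horizon can tip the balance: the open-source option carries a large upfront training/switching cost but lower recurring costs than an annual commercial license.

```python
# Illustrative cost comparison over an extended time horizon.
# All dollar figures are hypothetical placeholders.
def cumulative_cost(upfront, annual, years):
    """Total cost incurred by the end of each year 1..years."""
    return [upfront + annual * y for y in range(1, years + 1)]

commercial = cumulative_cost(upfront=0, annual=40_000, years=10)        # license fees only
open_source = cumulative_cost(upfront=60_000, annual=12_000, years=10)  # switching cost, then admin

# First year at which open source becomes cheaper overall.
breakeven = next(y for y, (c, o) in enumerate(zip(commercial, open_source), 1) if o < c)
print(f"open source cheaper from year {breakeven}")
```

With these numbers, a one- or two-year comparison favors the commercial license, but by year three the open-source option is cheaper cumulatively, and the gap widens every year thereafter.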

This column hasn’t touched on any of the other reasons organizations use open source: flexibility, reduced operational costs through not needing to track license compliance, and greater control of the organization’s software stack, since there are no forced upgrades or product end-of-life announcements. Because ROI is so tangible, however, it is critical to address it explicitly. Just keep in mind that there is no single answer; you need to find the right choice for your organization and your application.

Saturday, November 10, 2007

Strategy: an introduction

What is strategy?

The word “strategy” derives from the ancient Athenian position of strategos.
The definition of strategy itself, however, is complex and much debated.

“A number of reasons contribute to this complexity. First, the field represents the
convergence of multiple disciplines, including economics, organization theory, general business, marketing, finance, and geography (to name but a few). As a result, strategy is often viewed through different lenses, depending on one’s background and purpose. Second, and perhaps more important, business strategy is a very young field. As a result, not all of the concepts and approaches to analysis are yet well established or agreed on.” (http://www.ache.org/PUBS/Luke1.pdf, 2005-11-22)

Mintzberg created a “five Ps” model for the definition of strategy, which provides some clues to the rich meaning of the concept.

Strategy as Plan:
Strategy is a plan: some sort of consciously intended course of action, a guideline (or set of guidelines) to deal with a situation. By this definition, strategies have two essential characteristics: they are made in advance of the actions to which they apply, and they are developed consciously and purposefully.
Strategy as Ploy:
A strategy can be a ploy too; really just a specific ‘manoeuvre’ intended to outwit an opponent or competitor.
Strategy as Pattern:
Defining strategy as a plan is not sufficient; we also need a definition that encompasses the resulting behaviour. A third definition is proposed: strategy is a pattern, specifically a pattern in a stream of action. In other words, by this definition, strategy is consistency in behaviour, whether or not intended.
Strategy as Position:
The fourth definition is that strategy is a position: specifically, a means of locating an organization in what organization theorists like to call an ‘environment.’ By this definition, strategy becomes the mediating force between organization and environment, that is, between the internal and the external context.
Strategy as Perspective:
The fifth definition looks inside the organization, indeed inside the heads of the
collective strategist. Here, strategy is a perspective, its content consisting not just of a chosen position but of an ingrained way of perceiving the world.


“Strategy content: basically the “what” of strategy. This means defining what strategic decisions are about and what their intention is. The content perspective also addresses such questions as where are we going and what is the scope of the business.”

“Strategy context: the “where” of strategy. This is the set of factors that comprise the setting for a strategy. This includes the internal context of the organization as well as the characteristics of the external, operating environment.”

“Strategy process: the “how” of strategy. This details who is involved in the process and when activities take place. It is the story, the drama and the list of players in the strategy as well as the characteristics of the process itself.”


The Mintzberg & Waters model: intended and emergent strategies


According to Mintzberg and Waters (1985), their model distinguishes five kinds of strategies: emergent strategy, intended strategy, deliberate strategy, realized strategy and unrealized strategy. They define these as follows:
Emergent strategies can be seen as responses to unexpected opportunities and problems and are usually developed from the locations at which business-level strategies are usually implemented, i.e. within business units and not at corporate headquarters. The pure definition of emergence requires the absence of intentions.
Realized strategy is a blend of intentions and emergence which can be interpreted by reference to the strength of pressure from the external environment—a kind of environmental determinism.
Intended strategy is strategy as conceived of by the top management team. Even here, rationality is limited and the intended strategy is the result of a process of negotiation, bargaining, and compromise, involving many individuals and groups within the organization.
Mintzberg and Waters note that realized strategy – the strategy actually implemented – is only partly related to what was intended (Mintzberg suggests only 10–30 percent of intended strategy is realized). The primary determinant of realized strategy is what Mintzberg terms emergent strategy – the decisions that emerge from the complex processes in which individual managers interpret the intended strategy and adapt to changing external circumstances. The model should also be seen as a process, especially once the variable of time is included: as the model shows, realized strategy affects intended strategy as time goes by. This is an important part of the model, since it shows that current strategies will shape future strategies. There are two extreme types of organizations: those with only deliberate strategies and those with only emergent strategies. These two pure forms are very rare; perhaps no organization has either pure type of process. For a purely deliberate strategy, the organization must have precise intentions at a relatively concrete level of detail, and the plan must be carried out exactly as intended. For a strategy to be purely emergent, there must be consistency in action over time but without any intentions. Mintzberg & Waters (1985, pp. 257–258) argue that between these two extremely rare pure types lie several types of strategies that are common in companies today, and they classify eight of them:

1. The planned strategy
The planned strategy consists of clear intentions backed by formal control. The leader is the centre of authority, with very clear and precise intentions, and the goal is to translate those intentions into collective action with minimum distortion. Programs and systems are built into the plan to ensure that no one acts in another way than intended.
For this type of strategic process to be effective, the environment has to be extremely stable, or the organization has to be able to predict it with great accuracy. When organizations put large quantities of resources into a mission or project, they may not be able to tolerate an unstable environment: they plan several years ahead, do not allow deviating behaviour, and commit themselves firmly. Mining companies are one example.

2. Entrepreneurial strategy
The second type of strategy tolerates a little emergence but is still very much planned. The owner controls the organization tightly and can impose his or her vision or direction on it. This type of strategy is very common in young and entrepreneurial organizations. The central actor is the one who places the organization where he or she wants it in the world. Compared to the planned strategy, the intentions are harder to identify and less specific, but as long as the actors in the organization respond to the will of the leader, the strategy appears rather deliberate. Because the strategy comes from a single person, sudden changes and reformulations are not unusual. This adaptability is what distinguishes the entrepreneurial strategy from the planned one: a vision in one person’s head is more flexible than an articulated one. Articulation discourages the adaptability and “emergentness” of planned strategies; psychologists have shown that articulating a strategy entrenches it, impeding willingness to change it.

3. Ideological Strategy
Vision can be collective: when the members of an organization share a vision and identify so strongly with it that they pursue it as an ideology, this leads to patterns in their behaviour, so that clear realized strategies can be identified. Since an ideological strategy is likely to be overt and to become articulated, intentions can be seen, which is why this type of strategy can be called deliberate. These intentions are viewed as organizational, differing from the entrepreneurial and planned strategies in being embraced by everyone in the organization rather than originating from one centre and being accepted passively. The collective vision makes the strategy harder to change, because all members of the organization have to accept any change; moreover, the ideology is rooted in traditions and precedents, so people resist changing it. Mintzberg & Waters had not yet studied any organization dominated by an ideology, but such strategies seem to occur in certain organizations described in the literature.

4. Umbrella strategy
For the umbrella strategy, Mintzberg & Waters relax the condition of tight control over the actors in the organization and, in some cases, control over the environment. Leaders who have only partial control over the members of the organization can design an umbrella type of strategy: general guidelines for behaviour and defined boundaries within which the other actors in the organization can manoeuvre. This means that strategies can emerge within these boundaries.
The umbrella strategy can be labelled not only deliberate and emergent but also “deliberately emergent,” in the sense that the central leadership creates conditions which allow strategies to emerge. As in the entrepreneurial strategy, a certain vision emanates from the central leadership, but in the umbrella strategy the ones controlling the vision do not also control its realization. One example of the umbrella strategy is NASA during the 1960s, when it focused its efforts on putting a man on the moon.
Within this overall target, several different strategies emerged as various technical problems were solved by thousands of different specialists.

5. Process Strategy
The process strategy is similar to the umbrella strategy. The leadership functions in an organization in which actors must have considerable discretion to determine outcomes, because the environment is unpredictable and uncontrollable. Instead of controlling strategy at a general level with boundaries and targets, the leadership influences strategy indirectly: it controls the process of strategy making rather than the content of the strategy. This results in behaviour that is deliberate in one respect but emergent in another. The leadership designs the system from which patterns of action evolve.

6. Unconnected strategy
The unconnected strategy is perhaps the most straightforward of all. One part of the organization, a subunit or sometimes even an individual, is able to realize its own pattern in its stream of actions. Since these unconnected strategies come neither from the central leadership nor from the intentions of the whole organization, they can be considered relatively emergent. For the subunit or individual, however, they can clearly be deliberate or emergent depending on the prior existence of intentions. Thus the unconnected strategy may be deliberate or emergent for the actors involved, but it is always emergent from the perspective of the organization.

7. Consensus strategy:
In this strategy the condition of prior intentions is dropped entirely; this type of strategy is clearly emergent. Different actors converge on the same pattern or theme, so that it becomes pervasive in the organization without need for central direction or control. The consensus strategy grows out of mutual adjustment among the different actors as they learn from each other and from their responses to the environment, thereby finding a common pattern that works for the organization. The convergence is thus not driven by the intentions of management, or by prior intentions shared by the organization as a whole; rather, it evolves out of a host of individual actions. Sometimes actors may promote the consensus and try to negotiate others into accepting it, but the point is that this strategy comes more from collective actions than from collective intentions. One example could be a university that finds itself, over the years, favouring the sciences over the humanities as its members come to realize that this is where its real strengths lie.

8. Imposed strategy
This time the strategy comes from outside the organization: it is imposed on it. The environment can directly force the organization into a pattern in its stream of actions, regardless of what the central leadership does. The clearest case is when an external group or individual with great influence over the organization imposes a strategy on it. For example, state-owned Air Canada was forced by its minister to buy a particular type of plane. The strategy was clearly deliberate, but not deliberate for anyone in the organization; given its inability to resist, the organization had to pursue the given strategy, and thus it became deliberate for the organization. Sometimes it is the environment, rather than an individual or group, that imposes strategies on organizations by restricting their options. Once again, Air Canada can serve as an example: did Air Canada really choose to fly jet aeroplanes, and later wide-body aeroplanes? Could any world-class airline have decided otherwise? Again, the organization has to internalize the external strategies imposed on it. In reality, organizations have to compromise between determinism and free choice: the environment seldom pre-empts all choice, and just as rarely offers unlimited choice. Just as most real-world strategies have some umbrella characteristics, so too does the environment set boundaries for most organizations.

These eight types of strategic processes are a central part of Mintzberg & Waters’s (1985) theory. They claim that few if any companies can be classified as one of the two extremes, but that companies have characteristics similar to one of the above-mentioned types of processes. Thus all companies taking part in this study should be describable by one of them.

Friday, September 21, 2007

A Famous Speech by Steve Jobs: "Stay Hungry, Stay Foolish"

Another famous speech, this one by Steve Jobs.

Thank you. I'm honored to be with you today for your commencement from one of the finest universities in the world. Truth be told, I never graduated from college and this is the closest I've ever gotten to a college graduation.

Today I want to tell you three stories from my life. That's it. No big deal. Just three stories. The first story is about connecting the dots.

I dropped out of Reed College after the first six months but then stayed around as a drop-in for another eighteen months or so before I really quit. So why did I drop out? It started before I was born. My biological mother was a young, unwed graduate student, and she decided to put me up for adoption. She felt very strongly that I should be adopted by college graduates, so everything was all set for me to be adopted at birth by a lawyer and his wife, except that when I popped out, they decided at the last minute that they really wanted a girl. So my parents, who were on a waiting list, got a call in the middle of the night asking, "We've got an unexpected baby boy. Do you want him?" They said, "Of course." My biological mother found out later that my mother had never graduated from college and that my father had never graduated from high school. She refused to sign the final adoption papers. She only relented a few months later when my parents promised that I would go to college.

This was the start of my life. And seventeen years later, I did go to college, but I naïvely chose a college that was almost as expensive as Stanford, and all of my working-class parents' savings were being spent on my college tuition. After six months, I couldn't see the value in it. I had no idea what I wanted to do with my life, and no idea of how college was going to help me figure it out, and here I was, spending all the money my parents had saved their entire life. So I decided to drop out and trust that it would all work out OK. It was pretty scary at the time, but looking back, it was one of the best decisions I ever made. The minute I dropped out, I could stop taking the required classes that didn't interest me and begin dropping in on the ones that looked far more interesting.

It wasn't all romantic. I didn't have a dorm room, so I slept on the floor in friends' rooms. I returned Coke bottles for the five-cent deposits to buy food with, and I would walk the seven miles across town every Sunday night to get one good meal a week at the Hare Krishna temple. I loved it. And much of what I stumbled into by following my curiosity and intuition turned out to be priceless later on. Let me give you one example.

Reed College at that time offered perhaps the best calligraphy instruction in the country. Throughout the campus every poster, every label on every drawer was beautifully hand-calligraphed. Because I had dropped out and didn't have to take the normal classes, I decided to take a calligraphy class to learn how to do this. I learned about serif and sans-serif typefaces, about varying the amount of space between different letter combinations, about what makes great typography great. It was beautiful, historical, artistically subtle in a way that science can't capture, and I found it fascinating.

None of this had even a hope of any practical application in my life. But ten years later when we were designing the first Macintosh computer, it all came back to me, and we designed it all into the Mac. It was the first computer with beautiful typography. If I had never dropped in on that single course in college, the Mac would have never had multiple typefaces or proportionally spaced fonts, and since Windows just copied the Mac, it's likely that no personal computer would have them.

If I had never dropped out, I would have never dropped in on that calligraphy class and personal computers might not have the wonderful typography that they do.

Of course it was impossible to connect the dots looking forward when I was in college, but it was very, very clear looking backwards 10 years later. Again, you can't connect the dots looking forward. You can only connect them looking backwards, so you have to trust that the dots will somehow connect in your future. You have to trust in something--your gut, destiny, life, karma, whatever--because believing that the dots will connect down the road will give you the confidence to follow your heart, even when it leads you off the well-worn path, and that will make all the difference.

My second story is about love and loss. I was lucky. I found what I loved to do early in life. Woz and I started Apple in my parents' garage when I was twenty. We worked hard and in ten years, Apple had grown from just the two of us in a garage into a $2 billion company with over 4,000 employees. We'd just released our finest creation, the Macintosh, a year earlier, and I'd just turned thirty, and then I got fired. How can you get fired from a company you started? Well, as Apple grew, we hired someone who I thought was very talented to run the company with me, and for the first year or so, things went well. But then our visions of the future began to diverge, and eventually we had a falling out. When we did, our board of directors sided with him, and so at thirty, I was out, and very publicly out. What had been the focus of my entire adult life was gone, and it was devastating. I really didn't know what to do for a few months. I felt that I had let the previous generation of entrepreneurs down, that I had dropped the baton as it was being passed to me. I met with David Packard and Bob Noyce and tried to apologize for screwing up so badly. I was a very public failure and I even thought about running away from the Valley. But something slowly began to dawn on me. I still loved what I did. The turn of events at Apple had not changed that one bit. I'd been rejected but I was still in love. And so I decided to start over.

I didn't see it then, but it turned out that getting fired from Apple was the best thing that could have ever happened to me. The heaviness of being successful was replaced by the lightness of being a beginner again, less sure about everything. It freed me to enter one of the most creative periods in my life. During the next five years I started a company named NeXT, another company named Pixar and fell in love with an amazing woman who would become my wife. Pixar went on to create the world's first computer-animated feature film, "Toy Story," and is now the most successful animation studio in the world.

In a remarkable turn of events, Apple bought NeXT and I returned to Apple and the technology we developed at NeXT is at the heart of Apple's current renaissance, and Laurene and I have a wonderful family together.

I'm pretty sure none of this would have happened if I hadn't been fired from Apple. It was awful-tasting medicine but I guess the patient needed it. Sometimes life's going to hit you in the head with a brick. Don't lose faith. I'm convinced that the only thing that kept me going was that I loved what I did. You've got to find what you love, and that is as true for work as it is for your lovers. Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work, and the only way to do great work is to love what you do. If you haven't found it yet, keep looking, and don't settle. As with all matters of the heart, you'll know when you find it, and like any great relationship it just gets better and better as the years roll on. So keep looking. Don't settle.

My third story is about death. When I was 17 I read a quote that went something like "If you live each day as if it was your last, someday you'll most certainly be right." It made an impression on me, and since then, for the past 33 years, I have looked in the mirror every morning and asked myself, "If today were the last day of my life, would I want to do what I am about to do today?" And whenever the answer has been "no" for too many days in a row, I know I need to change something. Remembering that I'll be dead soon is the most important thing I've ever encountered to help me make the big choices in life, because almost everything--all external expectations, all pride, all fear of embarrassment or failure--these things just fall away in the face of death, leaving only what is truly important. Remembering that you are going to die is the best way I know to avoid the trap of thinking you have something to lose. You are already naked. There is no reason not to follow your heart.

About a year ago, I was diagnosed with cancer. I had a scan at 7:30 in the morning and it clearly showed a tumor on my pancreas. I didn't even know what a pancreas was. The doctors told me this was almost certainly a type of cancer that is incurable, and that I should expect to live no longer than three to six months. My doctor advised me to go home and get my affairs in order, which is doctors' code for "prepare to die." It means to try and tell your kids everything you thought you'd have the next ten years to tell them, in just a few months. It means to make sure that everything is buttoned up so that it will be as easy as possible for your family. It means to say your goodbyes.

I lived with that diagnosis all day. Later that evening I had a biopsy where they stuck an endoscope down my throat, through my stomach into my intestines, put a needle into my pancreas and got a few cells from the tumor. I was sedated but my wife, who was there, told me that when they viewed the cells under a microscope, the doctor started crying, because it turned out to be a very rare form of pancreatic cancer that is curable with surgery. I had the surgery and, thankfully, I am fine now.

This was the closest I've been to facing death, and I hope it's the closest I get for a few more decades. Having lived through it, I can now say this to you with a bit more certainty than when death was a useful but purely intellectual concept. No one wants to die, even people who want to go to Heaven don't want to die to get there, and yet, death is the destination we all share. No one has ever escaped it. And that is as it should be, because death is very likely the single best invention of life. It's life's change agent; it clears out the old to make way for the new. Right now, the new is you. But someday, not too long from now, you will gradually become the old and be cleared away. Sorry to be so dramatic, but it's quite true. Your time is limited, so don't waste it living someone else's life. Don't be trapped by dogma, which is living with the results of other people's thinking. Don't let the noise of others' opinions drown out your own inner voice, heart and intuition. They somehow already know what you truly want to become. Everything else is secondary.

When I was young, there was an amazing publication called The Whole Earth Catalogue, which was one of the bibles of my generation. It was created by a fellow named Stewart Brand not far from here in Menlo Park, and he brought it to life with his poetic touch. This was in the late Sixties, before personal computers and desktop publishing, so it was all made with typewriters, scissors, and Polaroid cameras. It was sort of like Google in paperback form thirty-five years before Google came along. It was idealistic, overflowing with neat tools and great notions. Stewart and his team put out several issues of The Whole Earth Catalogue, and then when it had run its course, they put out a final issue. It was the mid-Seventies and I was your age. On the back cover of their final issue was a photograph of an early morning country road, the kind you might find yourself hitchhiking on if you were so adventurous. Beneath were the words, "Stay hungry, stay foolish." It was their farewell message as they signed off. "Stay hungry, stay foolish." And I have always wished that for myself, and now, as you graduate to begin anew, I wish that for you. Stay hungry, stay foolish.

Thank you all, very much.

Sunday, September 16, 2007

Setting up your own website...

This is a compilation of several articles from rediff.com

Hosting options:

So you want to tell the world you have arrived, or you have a business and want to expand its reach. Whatever your motivation, setting up your own website is a great way to get noticed.

Before you shrug off the idea pleading ignorance of coding, Photoshop, web design and everything else related to the topic, take a look at our easy-to-follow guide to setting up your very own website.

The first decision you need to take is whether you want a free (almost free) web page or if you're willing to shell out a certain amount on a paid domain name and hosting services. (The domain name is the name through which people can access your web pages. Hosting is the service that provides you space on the internet to save the web pages that you have created.)

Free hosting services
Freewebs.com, tripod.lycos.co.uk, geocities.yahoo.com, freewebsites.com and many others provide you with space on their servers where you can host your website for free. For most of these free hosting services, you do not need to buy a domain name; the service provides you with a subdomain name on its server.

For instance, I may host my web pages for free on geocities and anyone on the world wide web may access my pages on 'http://geocities.com/ankurjain/index.html'. Some of these free hosting servers may also provide you with easy-to-use utilities to create web page polls, feedback forms, albums and blogs on the web pages you host on their servers.

However, if you are a firm believer in the saying 'Nothing comes for free', you aren't far wrong. The catch here is these sites will take up a prominent space on the pages you host on their servers (a small, but prime spot). The company will use this space to display advertisements relevant to the content of the webpage and relevant to the visitors to your page.

For instance, if you are an expert on financial planning and you plan to host some articles on financial planning, then the service provider may use the space to post an advertisement from a company dealing in mutual funds or tax consultancy firm.

So, while this option is very handy for people with tight budgets, if you are using a free service to host your company's web page, be warned that your clients might just end up seeing your competitor's advertisement on your home page.

That's where paid hosting services come in.

Paid hosting
Once you decide on your hosting options, the next step is to book a domain name. To buy a domain, visit any of the following websites: godaddy.com, buydomains.com, net4domains.com.

If I am hosting a website about myself, then I might want to buy the domain ankurjain.com. Since domain names are unique, only one website can exist with the name ankurjain.com. If someone has already booked this domain name, I could opt for ankurjain.in or ankurjain.name among others.

After checking the availability of the domain and the other domain options, the website will show you the annual price listing for each. You will need to renew the registration at the end of every year if you want to continue using the domain.

If you take the time to search, you'll find a number of good web hosting offers from Indian providers on the net. You can also simply choose to host your site with your registrar (the place you buy your domain from). For example, if you buy a domain from and host your website at GoDaddy.com, you can get 500 MB of web space, a 25-GB data transfer limit and 100 e-mail IDs with both POP3 and web mail access for just $3.95 (approx Rs 170) a month!

Which hosting provider you eventually choose depends solely on your needs and budget, but you should trawl the web for user reviews, which are a good pointer to the most user-friendly and dependable service providers.

Most hosting services provide you with complete know-how on how to set up your website on their servers. They provide easy-to-understand, step-by-step documents for uploading the pages you create onto their servers.

The only drawback is that you have to have a credit card!

Designing a homepage:

What affects your online presence most, either as an individual or as a company, is, perhaps, your website. Your home page is the one thing that defines you for the millions of strangers online. Today, with so many free options for web hosting, it's almost a crime not to have a functional and regularly updated site.

We will walk you through the basics of designing a simple site. You need to decide where and how your site will be hosted. All the millions of options you have can be categorised into either free or paid solutions. Once you've decided on the type you want, you can go ahead and start planning the look of your site.

A common misconception is that only coders who know HTML and other web design languages can design sites. In this age of software empowerment, anyone can do almost anything, and Web design has not been spared. Design programs such as Microsoft FrontPage and Macromedia Dreamweaver make a good design simple to achieve.

Frontpage

FrontPage has always had the distinction of being really easy to use, but has generally been less respected as a web design tool by hard-core designers. However, for those just starting out with Web design, there is no simpler tool. As with most Microsoft products, you'll take almost no time to get to grips with the interface.

Let's take a look at MS FrontPage:

~ When you run FrontPage, it starts off with a blank page.
~ You can choose to build a site from a template. There are several templates available -- choose one that matches your needs.
~ We chose to build a 'Corporate Presence' site.
~ Change the logos and add suitable text where clearly marked, and save each page.
~ Check the pages you've designed in different browsers by going to File > Preview in Browser.
~ Once all the pages have been edited, upload the entire web folder to your hosting server.
~ Check all the pages again, and your site is done.

Though this sounds a little too simple, the fact is FrontPage is designed that way, so the easiest way to experience its simplicity is to install it and try it yourself.

Web Components:

Web components in FrontPage are very important. FrontPage calls Flash movies, hit counters, buttons and the like 'Web Components'. The simplest way to see what each does is to open a blank page, insert one of each component and then preview the page in your favourite browser.

Pictures:

Inserting a picture in FrontPage is very easy: just go to Insert > Picture and choose the appropriate option.

Themes:

Use a theme to get a predefined colour scheme and save your time trying to think up one.

Import:

If you already have a site, and are looking to modify the existing site rather than building a new one from scratch, you can choose to import your site by going to File > Import.

Dreamweaver

Macromedia Dreamweaver is the preferred professional Web design tool. Though it is not as easy to use as FrontPage, it is considered to be much more powerful. Of course, this does not mean it is impossible to use, and it is actually quite simple to accomplish basic tasks, just like in Microsoft FrontPage.

When you run Dreamweaver, you are presented with a blank page. You can just close that page and go to File > New, and in the 'New Document' dialog that pops up, choose 'Page Designs' from the 'Category' pane on the left. You will see a long list of options in the 'Page Designs' pane in the middle, and clicking on one will show you a preview of what that design looks like in the 'Preview' pane on the right.

Choose one that suits your needs for the page you're designing and click 'Create'. Just like in FrontPage, you are shown a page with blank images and dummy text. All you have to do is edit the text and images by double clicking on them.

Once you are satisfied with your final page, press [Ctrl] + [Shift] + [S] and choose where to save the web page. Create all the pages you want for your site and save all pages in the same folder relative to the index.html page. If you need to make new folders, do so and make sure to keep all images in a separate 'images' folder. This will help you sort your data more efficiently.

Once you are done creating all the desired pages, make sure to right-click to see the many options easily available to you. Preview all of them in a browser -- you can do this by pressing [F12]. The last step is to upload all the pages and images to your web server. Remember to keep all relative paths exactly the same as they are in your root folder (where 'index.html' is saved) on your hard disk.

Inserting Objects:

In Dreamweaver, you can insert a large variety of objects into the current page. Just click on the 'Insert' menu to see the list of available objects. Once done, don't forget to save your page.

Right-click:

Dreamweaver's right-click menu is quite exhaustive. Right-click anywhere on the page to see the long list of options available.

Tables:

Drawing tables is easy in both Dreamweaver and FrontPage. All you need to do is look for the 'Insert Table' option in the 'Insert' dropdown menu.

Task Panes:

In order to use Dreamweaver more efficiently, you'll need to learn to use the various task panes provided on the top, bottom and right-hand side. The best way to learn is to use each task pane and get the hang of the software.

Covering both tools in detail would require a big fat book. Just start using them -- look up tips, tutorials and basic information on the internet for help.

Avoid these errors:

More often than not, web designers get carried away by technology and create websites that have a lot of frills and fluff but nothing that actually tells the visitor what the website offers. There are hundreds of websites sporting constantly changing images and floating objects, but falling desperately short of holding the visitor's attention long enough for them to find out more. If you want your website to be seen and visited regularly, keep in mind that (as in life) looks aren't everything.

Here are some of the tools you need to use with care:

Frames

Five years ago, frames were the 'in' thing -- they let one or two parts of your site remain constant while only one frame changes. The idea caught on like wildfire, and every second designer, at one point, was designing sites with frames. However, as content on the web grew, search engines became more popular and the concept of bookmarking caught on. Frames didn't gel well with bookmarks.

In the context of search, the concept of frames is flawed: it uses one HTML page that pulls content from two or more other pages and places it in pre-defined parts of the screen. Though this sounds like a good idea, search robots visit the index page and catalogue it as the content of your site -- and that page contains little more than the HTML code referring to the other pages. This led to sites with frames dropping rapidly in search engine rankings, and it is perhaps the major reason frames are no longer popular; after all, search engine optimisation (SEO) is a full-time business now, and a major aspect of web design.

Misleading or useless page titles

The page title is the text that is supposed to describe the current page, or at least your site as a whole. This text appears between the <title> and </title> tags in an HTML page. Sadly, most amateur designers fail to notice the importance of titles, and leave it as "Welcome to xyz.com".

Again, this is very wrong when you take into account the way search engines display results: almost all search engines use the page title as the text of the result's link. Even if a search throws up your page as a result, all a potential visitor sees is the text "Welcome to xyz.com", and a few lines from the site.

The page title is what will draw a visitor to your site, and as the latest generation of internet surfers is quite comfortable using search engines, they have learnt to ignore sites with such title tags.
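One way to see exactly what a search engine will pick up is to extract the title yourself. Below is a minimal sketch using Python's standard html.parser module; the page markup and title text are made up for illustration.

```python
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Collects the text inside the <title> tag, as a search engine would."""
    def __init__(self):
        super().__init__()
        self._in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

# Hypothetical page source for illustration.
page = "<html><head><title>Welcome to xyz.com</title></head><body>...</body></html>"
parser = TitleExtractor()
parser.feed(page)
print(parser.title)  # the text a search engine shows as your result's link
```

If the printed text wouldn't make a stranger click, the title needs rewriting.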

Let's say you're in the business of selling computer hardware, and your site is called 'xyz.com'. Another company also sells IT products, and has a site called 'abc.com'. Let's assume a potential visitor or customer searches for "computer hardware vendor India" in a search engine and gets the following results:

"Welcome to xyz.com

We believe in finding the right solutions for your pocket. Whatever your budget for computer hardware, we have a configuration that will fit your needs. Vendors of --

www.xyz.com/52K-8 Aug 05. Cached - Similar pages"

"Something.com

Your source for in-depth computer hardware info.

www.something.com/524K-10 Aug 05. Cached - Similar pages"

"Abc.com: The best computer systems in India at unbeatable prices

ABC is the leading computer hardware information resource on the 'Net. We have all the information you need about computer hardware, including vendors.

Abc is a renowned computer hardware vendor based in Mumbai. We deliver anywhere in India within 48 hours, and have over 1,000 service centers located at.

www.abc.com/24K-10 Aug 03. Cached - Similar pages"

Even though your site, xyz.com, may show up first, chances are potential visitors will click on the link to Abc.com, only because the title tag matched what was searched for. In an age where internet users estimate relevance based on a quick scan of content on search engines and web sites, the wrong title tag could kill your chances of getting hits.

Size matters

Considering that India is yet to achieve mass adoption of broadband, sites with large page sizes are frowned upon. When you get data transferred at around 5 KBps, the average 100 KB page takes 20 seconds to load. This is already too long in terms of visitors' patience, and the only reason most sites get away with it is that here in India, we're used to waiting for pages to load.

However, if you have a site with a 500 KB start page, that translates to a minimum of 100 seconds of waiting for the page to load. On average, try to keep pages as small as possible -- less than 100 KB -- by using fewer images and cutting unnecessary design elements. This will ensure that users have a better experience at your site.
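The arithmetic above is simple enough to script. A quick sketch, assuming a sustained 5 KBps transfer rate and ignoring compression, caching and parallel connections:

```python
# Rough load-time estimate. The 5 KBps figure is an assumed
# dial-up throughput, not a measured one.
KBPS = 5  # KB transferred per second

def load_time_seconds(page_kb, kbps=KBPS):
    """Seconds to download a page of page_kb kilobytes."""
    return page_kb / kbps

for size in (100, 500):
    print("%d KB page: %.0f seconds" % (size, load_time_seconds(size)))
```

Running this reproduces the 20-second and 100-second figures quoted above.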

Flash designs

Although the use of Macromedia Flash has revolutionised the way content can be stored on your site and displayed to visitors, it is another bandwidth hog. Since Flash is generally used to deliver video or animation on web pages, the file sizes are almost always large. Even simple button animations add a few kilobytes here and there, which can total up to a lot.

You also need to remember that text almost always loads before graphics, and most users will scan through whatever appears on your site first and decide whether the content is what they are looking for. Even though you may have relevant content neatly displayed using a fancy Flash links menu, many users may not stick around long enough to see this.

Gaudy colours

Some websites give you the idea that the people who designed them are colour blind! The net is jam-packed with sites that use vivid reds, pinks and purples that distract rather than attract. There isn't much to explain here: with millions of colour combinations available, it's up to you to decide which combinations just don't work.

Browser support

Most designers don't seem to care whether their sites work on browsers other than Internet Explorer (IE). In fact, it's not just site design you should worry about; you should also choose a hosting solution that is compatible with most browsers. For example, Microsoft's ASP technology is largely targeted at IE, and browsers such as Opera and Mozilla Firefox often run into trouble with some ASP pages.

The best way to check the functionality of your site is to test it in the most popular browsers. We suggest you start with Lynx, the popular text-based browser on Linux, to see what your site will most probably look like to a search engine bot, and then work your way through IE, Mozilla Firefox, Opera and others.

Balanced content

Sites with only text, or even those with only graphics, can be very irritating to view. Even if your site has only text, try and make the text layout look good.

You should never replace text with graphics, such as an animated GIF image with changing text. Doing so only increases the size of the page, and takes away people's ability to resize the text according to their needs. It also almost always results in a low-resolution image with very poor quality text.

A good rule of thumb is the ratio of 80:20 -- 80 per cent text to 20 per cent images. This means that when you look at the pages you design, no more than 20 per cent of the screen area should be images.

You also need to remember that websites have depth, so there's no reason to try and cram everything you offer into your homepage. Divide your content into little piles, study your existing as well as target audiences, and then tag your content in decreasing order of importance or interest to this audience.

The top 50 per cent of your content should be easily accessible from your homepage, and the rest can either be distributed lower down on your homepage or stored in lower levels (different pages) of your site.

Navigation

Very often, you come across sites that lead you to a page best described as a virtual dead-end. Somehow, visitors get stuck in a place with no clue as to how they got there. This usually results in the visitor clicking the little [X] in the top right corner, banishing your site from their screen forever! The entire reason for taking pains to design sites is to prevent this from ever happening -- yet it does!

So what causes this problem? Simple. Bad design!

All the pages your site contains should at least have links to your homepage and major sections. The easiest way to do this would be to have a constant navigation bar that has a fixed position on every page of your site. A good rule to follow is the 'three-click rule': no page in your site should be more than three clicks away from any other page.
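The three-click rule can be checked mechanically by treating your site as a graph of links and running a breadth-first search from every page. The site map below is hypothetical; a real check would crawl your actual pages. Note how the constant navigation bar keeps every page close to every other.

```python
from collections import deque

# Hypothetical site map: each page lists the pages it links to.
# A constant navigation bar appears on every page.
NAV = ["index.html", "products.html", "about.html", "contact.html"]
site = {
    "index.html":    NAV,
    "products.html": NAV + ["widgets.html"],
    "widgets.html":  NAV + ["specs.html"],
    "specs.html":    NAV,
    "about.html":    NAV,
    "contact.html":  NAV,
}

def clicks_from(start):
    """Breadth-first search: fewest clicks from `start` to each reachable page."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for link in site.get(page, []):
            if link not in dist:
                dist[link] = dist[page] + 1
                queue.append(link)
    return dist

# The worst case over all start pages tells you if the rule holds.
worst = max(max(clicks_from(p).values()) for p in site)
print("Deepest page is %d clicks away" % worst)
```

For this map the worst case is exactly three clicks; remove the navigation bar from a deep page and the number climbs immediately.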

Being over-creative

Sometimes, designers have delusions of grandeur, and think they would make good copywriters or visualisers in advertising agencies. This leads to sites that are cryptic to the common man. Such sites might be acceptable if the business or individual is attempting to show off their creative prowess -- as might be the case with an actual copywriter or advertising agency, or perhaps an artist's homepage. However, most often, such sites are a big no-no for the majority.

Outdated pages

Nothing is worse than having a site that has outdated content. Sites that contain content on the homepage that was last updated over a couple of months ago are often considered 'neglected', and are ignored by visitors. This holds true especially if you add a 'News' section to your site and start updating it regularly and then somewhere along the line, falter and stop updating it.

No resizing

Web pages that cannot be resized are a big no-no. Often, people browse websites without maximising the window. If half your content is not visible because your site does not allow for resizing windows, you have a problem on your hands. This also holds true for text resizing: nothing is more irritating than text on a page that visitors cannot increase or decrease the size of by using their browser's text resize option.

Moving text and images

Scrolling text marquees are ugly! They are not considered good design any more, and you should avoid putting in scrolling text as far as possible. The only form in which scrolling text is acceptable is as a ticker on your site, such as a stock quote ticker, news ticker, or Shout box, where visitors can leave a message for other visitors. Basically, only if you need to dedicate a very limited space to content that changes rapidly, or is input by visitors, should you consider scrolling text.

Animated GIF images, too, are now considered amateurish. Most often, only jokes and ads contain GIF animation. Web users are accustomed to seeing GIF animations as advertisements, and have learnt to ignore them on sight. This means that all the effort you put into animating an image will be wasted on the majority of visitors.

If you have to include moving pictures, make sure the animation is done well and flows smoothly, otherwise it's likely that you'll make your site look unprofessional.

Wednesday, September 12, 2007

Online Marketing

A McKinsey survey of marketing executives from around the world shows that in marketing, things are starting to change: companies are moving online across the spectrum of marketing activities, from building awareness to after-sales service, and they see online tools as an important and effective component of their marketing strategies. By 2010, respondents expect a majority of their customers to discover new products or services online and a third to purchase goods there. A majority of the respondents also expect their companies to be getting 10 percent or more of their sales from online channels in 2010 -- twice as many companies as have hit that mark today.

In addition to established online tools such as e-mail, information-rich Web sites, and display advertising, survey respondents show a lot of interest in the interactive and collaborative technologies collectively known as Web 2.0 for advertising, product development, and customer service.

What are emerging vehicles?

Blogs (short for Web logs) are online journals or diaries hosted on a Web site.

Online games include both games played on dedicated game consoles that can be networked and “massively multiplayer” games, which involve thousands of people who interact simultaneously through personal avatars in online worlds that exist independently of any single player’s activity.

Podcasts are audio or video recordings -- a multimedia form of a blog or other content. They are often distributed through aggregators, such as iTunes.

Social networks allow members of specific sites to learn about other members’ skills, talents, knowledge, or preferences. Commercial examples include Facebook and MySpace. Some companies use such systems internally to help identify experts.

Virtual worlds, such as Second Life, are highly social, three-dimensional online environments shaped by users who interact with and receive instant feedback from other users through the use of avatars.

Web services are software systems that make it easier for different systems to communicate with each other automatically to pass information or conduct transactions. A retailer and supplier, for example, might use Web services to communicate over the public Internet and automatically update each other’s inventory systems.

Widgets are programs that allow access from users’ desktops to Web-based content.

Wikis, such as Wikipedia, are systems for collaborative publishing. They allow many authors to contribute to an online document or discussion.

In four of the five major areas of marketing, a majority of executives—83 percent for service management and, even at the low end, 44 percent for pricing—say that online tools are at least somewhat important for companies in their industries. At least two-thirds of companies are using these tools in all the areas they deem most important.

The importance of these tools naturally varies among industries -- for instance, 65 percent of the respondents in high tech say that advertising online is very or extremely important for them, compared with just 39 percent in manufacturing. There are also two other likely reasons for the relatively low use of online tools: a lack of capabilities to manage them, and the fact that access to high-speed Internet connections (required for many of these tools) is uneven (just under half of Europeans have it, for example, compared with 59 percent of the US population).

Although marketers expect to rely increasingly on digital-advertising vehicles, they recognize barriers that could slow the adoption of these tools. The lack of sufficient capabilities at companies or their agencies is the most significant concern, for both those that are advertising and those that aren’t (Exhibit 6); among online advertisers, for example, about 60 percent of responses indicate that insufficient capabilities are a barrier. Even among respondents at companies that frequently use online tools for all marketing purposes, a full 50 percent of responses highlight capability barriers to advertising. Other McKinsey research shows that a lack of online capabilities extends far beyond the marketing department: 42 percent of the respondents to another global survey said that investing more in the capabilities of their companies would have made initial investments in Internet technologies more effective.

Thursday, August 30, 2007

Security Metrics

The pressure is on. Various surveys indicate that over the past several years computer security has risen in priority for many organizations. Spending on IT security has increased significantly in certain sectors -- four-fold since 2001 within the federal government alone. As with most concerns that achieve high-priority status with executives, computer security is increasingly becoming a focal point not only for investment, but also for scrutiny of return on that investment. In the face of regular, high-profile news reports of serious security breaches, security managers are more than ever before being held accountable for demonstrating the effectiveness of their security programs.

What means should managers be using to meet this challenge? Some experts believe that key among these should be security metrics. This guide provides a definition of security metrics, explains their value, discusses the difficulties in generating them, and suggests a methodology for building a security metrics program.

Definition of Security Metrics

It helps to understand what metrics are by drawing a distinction between metrics and measurements. Measurements provide single-point-in-time views of specific, discrete factors, while metrics are derived by comparing two or more measurements taken over time to a predetermined baseline. Measurements are generated by counting; metrics are generated from analysis. In other words, measurements are objective raw data, and metrics are either objective or subjective human interpretations of those data. Good metrics are those that are SMART -- specific, measurable, attainable, repeatable, and time-dependent -- according to George Jelen of the International Systems Security Engineering Association. Truly useful metrics indicate the degree to which security goals, such as data confidentiality, are being met, and they drive actions taken to improve an organization's overall security program.

A Good Metric Must:
1. Be consistently measured. The criteria must be objective and repeatable.
2. Be cheap to gather. Using automated tools (such as scanning software or password crackers) helps.
3. Contain units of measure. Time, dollars or some numerical scale should be included -- not just, say, "green," "yellow" or "red" risks.
4. Be expressed as a number. Give the results as a percentage, ratio or some other kind of actual measurement. Don't give subjective opinions such as "low risk" or "high priority."
Source: Andrew Jaquith
A Good Visualization of Metrics Will:
1. Not be oversimplified. Executives can handle complex data if it's presented clearly.
2. At the same time, not be ornate. Gratuitous pictures, 3-D bars, florid design and noise around the data diminish effectiveness.
3. Use a consistent scale. Switching scales within a single graphic presentation makes it confusing or suggests you're trying to bend the facts.
4. Include a comparison to a benchmark, where applicable. "You are here" or "The industry is here" is often a simple but informative comparative element to add.

By no means does Jaquith (or CSO for that matter) think these five metrics are the final word on infosecurity. Quite the contrary, they're a starting point, relatively easy to ascertain and hopefully smart enough to get CISOs thinking about finding other metrics like these, out in the vast fields of data, waiting to be reaped.

Metric 1: Baseline Defenses Coverage (Antivirus, Antispyware, Firewall, and so on)

This is a measurement of how well you are protecting your enterprise against the most basic information security threats. Your coverage of devices by these security tools should be in the range of 94 percent to 98 percent. Less than 90 percent coverage may be cause for concern. You can repeat the network scan at regular intervals to see if coverage is slipping or holding steady. If in one quarter you've got 96 percent antivirus coverage, and it's 91 percent two quarters later, you may need more formalized protocols for introducing devices to the network or a better way to introduce defenses to devices. In some cases, a drop may stir you to think about working with IT to centralize and unify the process by which devices and security software are introduced to the network. An added benefit: By looking at security coverage, you're also auditing your network and most likely discovering devices the network doesn't know about. "At any given time, your network management software doesn't know about 30 percent of the IP addresses on your network," says Jaquith, because either they were brought online ad hoc or they're transient.
How to get it: Run network scans and canvass departments to find as many devices and their network IP addresses as you can. Then check those devices' IP addresses against the IP addresses in the log files of your antivirus, antispyware, IDS, firewall and other security products to find out how many IP addresses aren't covered by your basic defenses.
Expressed as: Usually a percentage. (For example, 88 percent coverage of devices by antivirus software, 71 percent coverage of devices by antispyware and so forth.)
Not good for: Shouldn't be used for answering the question "How secure am I?" Maximum coverage, while an important baseline, is too narrow in scope to give any sort of overall idea of your security profile. Also, probably not yet ready to include cell phones, BlackBerrys and other personal devices, because those devices are often transient and not always the property of the company, even if they connect to the company.
Try these advanced versions: You can parse coverage percentages according to several secondary variables. For example, percentage coverage by class of device (for instance, 98 percent antivirus coverage of desktops, 87 percent of servers) or by business unit or geography (for instance, 92 percent antispyware coverage of desktops in operations, 83 percent of desktops in marketing) will help uncover tendencies of certain types of infrastructure, people or offices to miss security coverage. In addition, it's a good idea to add a time variable: Average age of antivirus definitions (or antispyware or firewall rules and so on). That is, 98 percent antivirus coverage of manufacturing servers is useless if the average age of the virus definitions on manufacturing's servers is 335 days. A star company, Jaquith says, will have 95 percent of their desktops covered by antivirus software with virus definitions less than three days old.
One possible visualization: Baseline defenses can be effectively presented with a "you are here" (YAH) graphic. A YAH needs a benchmark—in this case it's the company's overall coverage. After that, a business unit, geography or other variable can be plotted against the benchmark. This creates an easy-to-see graph of who or what is close to "normal" and will suggest where most attention needs to go. YAHs are an essential benchmarking tool. The word "you" should appear many times on one graphic. Remember, executives aren't scared of complexity as long as it's clear. Here's an example: plotting the percentages of five business units' antivirus and antispyware coverage and the time of their last update against a companywide benchmark.
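The cross-referencing described under "How to get it" boils down to set arithmetic over IP addresses. Here is a sketch with invented inventories (the device counts and addresses are made up for illustration):

```python
# Hypothetical inventories: all device IPs found by a network scan,
# and the IPs that appear in the antivirus product's log files.
scanned = {"10.0.0.%d" % i for i in range(1, 51)}   # 50 devices found
av_logs = {"10.0.0.%d" % i for i in range(1, 45)}   # 44 devices reporting in

uncovered = scanned - av_logs                        # devices with no AV logs
coverage = 100 * len(scanned & av_logs) / len(scanned)

print("Antivirus coverage: %.0f%%" % coverage)
print("Uncovered devices:", sorted(uncovered))
```

The same set difference, run against each tool's logs in turn, gives the per-tool percentages (antivirus, antispyware, firewall) that the metric calls for, and the `uncovered` set doubles as the list of machines your network management software didn't know about.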

Metric 2: Patch Latency

Patch latency is the time between a patch's release and your successful deployment of that patch. This is an indicator of a company's patching discipline and ability to react to exploits, "especially in widely distributed companies with many business units," according to Jaquith. As with basic coverage metrics, patch latency stats may show machines with lots of missing patches or machines with outdated patches, which might point to the need for centralized patch management or process improvements. At any rate, through accurate patch latency mapping, you can discover the proverbial low-hanging fruit by identifying the machines that might be the most vulnerable to attack.
How to get it: Run a patch management scan on all devices to discover which patches are missing from each machine. Cross-reference those missing patches with a patch clearinghouse service and obtain data on 1. the criticality of each missing patch and 2. when the patches were introduced, to determine how long each missing patch has been available.
Expressed as: Averages. (For example, servers averaged four missing patches per machine. Missing patches on desktops were on average 25 days old.)
Not good for: Companies in the middle of regression testing of patch packages, such as the ones Microsoft releases one Tuesday every month. You should wait to measure patch latency until after regression testing is done and take into account the time testing requires when plotting the information. The metrics might also get skewed by mission-critical systems that have low exposure to the outside world and run so well that you don't patch them for fear of disrupting ops. "There are lots of systems not really open to attack where you say, ‘It runs, don't touch it,'" says Jaquith. "You'll have to make a value judgment [on patch latency] in those cases."
Try these advanced metrics: As with baseline coverage, you can analyze patch latency by business unit, geography or class of device. Another interesting way to look at patch latency statistics is to match your average latency to the average latency of exploits. Say your production servers average 36 days on missing patches' latency, but similar exploits were launched an average of 22 days after a patch was made available. Well, then you have a problem. One other potentially useful way to approach patch latency is to map a patch to its percent coverage over time. Take any important patch and determine its coverage across your network after one day, three days, five days, 10 days and so on.
One possible visualization: For data where you can sum up the results, such as total number of missing patches, a "small multiples" graphic works well. With small multiples you present the overall findings (the whole) as a bar to the left. To the right, you place bars that are pieces making up the whole bar on the left. This presentation will downplay the overall findings in favor of the individual pieces. One key in small multiples graphing is to keep the scale consistent between the whole and the parts. This example plots total number of missing patches for the top and bottom quartiles of devices (the best and worst performers). Then it breaks down by business unit who's contributing to the missing patches.
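Computing the averages is straightforward once the scan data is in hand. A sketch with hypothetical scan output (the machine names, release dates and "today" reference are invented):

```python
from datetime import date

# Hypothetical scan output: for each machine, the release dates of the
# patches it is still missing. Latency here is days from vendor release
# until today, since the patch remains undeployed.
today = date(2007, 8, 30)
missing_patches = {
    "server-01":  [date(2007, 8, 1), date(2007, 7, 15)],
    "desktop-07": [date(2007, 8, 20)],
}

ages = [(today - released).days
        for dates in missing_patches.values()
        for released in dates]

print("Missing patches:", len(ages))
print("Average age: %.1f days" % (sum(ages) / len(ages)))
```

Grouping the same dictionary by business unit or device class, rather than pooling everything, gives the advanced breakdowns described above.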

Metric 3: Password Strength
This metric offers simple risk reduction: it sifts out bad passwords so they can be made harder to break, and it finds potential weak spots where key systems still use default passwords. Password cracking can also be a powerful demonstration tool with executives who themselves have weak passwords. By showing them in person how quickly you can break their passwords, you will improve your lines of communication with them and their understanding of your role.
How to get it: Using commonly available password cracking programs, attempt to break into systems with weak passwords. Go about this methodically, first attacking desktops, then servers or admin systems. Or go by business unit. You should classify your devices and spend more time attempting to break the passwords to the more important systems. "If it's a game of capture the flag," Jaquith says, "the flag is with the domain controller, so you want stronger access control there, obviously."
Expressed as: Length of time or average length of time required to break passwords. (For example, admin systems averaged 12 hours to crack.) Can be combined with a percentage for a workgroup view (for example, 20 percent of accounts in business unit cracked in less than 10 minutes). Is your password subject to a lunchtime attack? That is, can it be cracked in the 45 minutes you are away from your desk to nosh?
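The two expressions described above, average time to crack and percentage cracked within a window, are straightforward to compute once you have per-account crack times. A sketch, with entirely hypothetical numbers:

```python
# Hypothetical crack-time results (in minutes) per account, by unit.
crack_times = {
    "finance": [8, 45, 720, 3, 1500],
    "eng":     [60, 2400, 180, 9000],
}

LUNCHTIME = 45  # minutes you might be away from your desk

for unit, times in crack_times.items():
    avg_hours = sum(times) / len(times) / 60
    pct_lunch = 100 * sum(t <= LUNCHTIME for t in times) / len(times)
    print(f"{unit}: avg {avg_hours:.1f} h to crack, "
          f"{pct_lunch:.0f}% vulnerable to a lunchtime attack")
```

The per-unit view makes it easy to see which workgroups need password training most urgently.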
Not good for: User admonishment, judgment. The point of this exercise is not to punish offending users, but to improve your security. Skip the public floggings and just quietly make sure employees stop using their mother's maiden name for access.
Try this: Use password cracking as an awareness-program audit tool. Set up two groups (maybe business units). Give one group password training. The other group is a control; it doesn't get training. After several months and password resets, try to crack the passwords in both groups to see if the training led to better passwords.
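The audit described above reduces to comparing crack rates between the trained group and the control group. A minimal sketch, with assumed group sizes and results:

```python
# Hypothetical audit results after several months and password resets.
trained = {"accounts": 200, "cracked": 18}
control = {"accounts": 200, "cracked": 61}

def crack_rate(group):
    """Percentage of a group's passwords that were cracked."""
    return 100 * group["cracked"] / group["accounts"]

print(f"trained: {crack_rate(trained):.0f}% cracked")
print(f"control: {crack_rate(control):.0f}% cracked")
```

A large gap between the two rates is evidence the awareness program is working; a small one suggests the training needs rework.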
One possible visualization: Both YAH and small multiples graphics could work with this metric. (See the graphics for Metric 1 and Metric 2.)

Metric 4: Platform Compliance Scores

How to get it: Widely available tools, such as the Center for Internet Security (CIS) scoring toolset, run tests against systems to find out whether your hardware meets best-practice standards such as those set by CIS. The tools take minutes to run and check for basic but often overlooked security lapses: ports left unnecessarily open, machines shared indiscriminately, default permissions left in place, and the like. The scoring system is usually simple, and given how quickly the assessments run, CISOs can in short order get a good picture of how "hardened" their hardware is by business unit, by location or by any other variable they please.
Expressed as: Usually a score from 0 to 10, with 10 being the best. Best-in-class, hardened workstations score a 9 or a 10, according to Jaquith. He says this metric is far more rigorous than standard questionnaires that ask if you're using antivirus software or not. "I ran the benchmark against the default build of a machine with Windows XP Service Pack 2, a personal firewall and antivirus protection, and it scored a zero!" Jaquith notes.
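As a toy illustration of how such a 0-to-10 score might be aggregated (this is not the CIS algorithm; the checks and weights below are assumptions), you can weight each benchmark check and scale the passed weight onto the score range:

```python
# Toy hardening benchmark: (check name, weight, passed?).
checks = [
    ("unnecessary ports closed",      3, True),
    ("no indiscriminate file shares", 2, False),
    ("default permissions tightened", 3, True),
    ("password policy enforced",      2, True),
]

total = sum(w for _, w, _ in checks)
passed = sum(w for _, w, ok in checks if ok)
score = round(10 * passed / total, 1)
print(f"Hardening score: {score}/10")
```

Because the score is a simple weighted ratio, failing a single heavily weighted check can drag an otherwise well-configured machine down sharply, which matches Jaquith's observation about a stock Windows XP build scoring zero.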
Not good for: Auditing, comprehensive risk assessment or penetration testing. While a benchmark like this may be used to support those advanced security functions, it shouldn't replace them. But if you conduct a penetration test after you've benchmarked yourself, chances are the pen test will go more smoothly.
Try this: Use benchmarking in hardware procurement or integration services negotiations, demanding configurations that meet some minimum score. Also demand baseline scores from partners or others who connect to your network.
One possible visualization: An overall score here is simple to do: It's a number between 0 and 10. To supplement that, consider a tree map. Tree maps use color and space in a field to show "hot spots" and "cool spots" in your data. They are not meant for precision; rather they're a streamlined way to present complex data. They're "moody." They give you a feel for where your problems are most intense. In the case of platform-compliance scores, for instance, you could map the different elements of your benchmark test and assign each element a color based on how risky it is and a size based on how often it was left exposed. Be warned, tree maps are not easy to do. But when done right, they can have instant visual impact.

Metric 5: Legitimate E-Mail Traffic Analysis

Legitimate e-mail traffic analysis is a family of metrics including incoming and outgoing traffic volume, incoming and outgoing traffic size, and traffic flow between your company and others. There are any number of ways to parse this data; mapping the communication flow between your company and your competitors may alert you to an employee divulging intellectual property, for example. Until now, the fascination has been with comparing the amount of good and junk e-mail that companies receive (typically about 20 percent good and 80 percent junk). Such metrics can be disturbing, but Jaquith argues they're also relatively useless. More useful is monitoring legitimate e-mail flow over time, which teaches you where to set alarm points. At least one financial services company has benchmarked its e-mail flow to the point that it knows to flag traffic when e-mail size exceeds several megabytes and when a certain number go out in a certain span of time.
How to get it: First shed all the spam and other junk e-mail from the population of e-mails that you intend to analyze. Then parse the legitimate e-mails every which way you can.
Not good for: Employee monitoring. Content surveillance is a different beast. In certain cases you may flag or monitor questionable content if there's a prior reason to do so, but traffic-analysis metrics aren't concerned with content except as it relates to the size of e-mails. A spike in large e-mails leaving the company and flowing to competitors may signal IP theft.
Added benefit: An investigations group can watch e-mail flow during an open investigation, say, when IP theft is suspected.
Try this: Monitor legitimate e-mail flow over time. CISOs can actually begin to predict the size and shape of spikes in traffic flow by correlating them with events such as an earnings conference call. You can also mine data after unexpected events to see how they affect traffic and then alter security plans to best address those changes in e-mail flow.
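The alarm-point idea described above is essentially baselining: learn the normal daily volume, then flag days that fall far outside it. A minimal sketch using a three-sigma threshold (the daily counts are invented):

```python
import statistics

# Hypothetical daily counts of large outbound e-mails over a baseline period.
baseline = [12, 15, 9, 14, 11, 13, 10, 16, 12, 14]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
threshold = mean + 3 * stdev  # alarm on anything three sigmas above normal

def flag(today_count):
    """Return True when today's volume is anomalously high."""
    return today_count > threshold

print(f"Baseline {mean:.1f} +/- {stdev:.1f}; alarm above {threshold:.1f}")
```

Predictable events such as an earnings call can then be handled by widening the threshold on those days rather than treating every spike as suspicious.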
One possible visualization: Traffic analysis is suited well to a time series graphic. Time series simply means that the X axis delineates some unit of time over which something happens. In this case, you could map the number of e-mails sent and their average size (by varying the thickness of your bar) over, say, three months. As with any time line, explain spikes, dips or other aberrations with events that correlate to them.
Metric 6: Application Risk Index
How to get it: Build a risk indexing tool to measure risks in your top business applications. The tool should ask questions about the risks in the application, with certain answers corresponding to a certain risk value. Those risks are added together to create an overall risk score.
Expressed as: A score, or temperature, or other scale for which the higher the number, the higher the exposure to risk. Could also be a series of scores for different areas of risk (for example, business impact score of 10 out of 16, compliance score of 3 out of 16, and other risks score of 7 out of 16).
Industry benchmark: None exists. Even though the scores are based on observable facts about your applications (Is it customer facing? Does it include identity management? Is it subject to regulatory review?), this is the most subjective metric on the list, because you or someone else assigns the initial values to the risks in the survey instrument. For example, it may be a fact that your application is customer facing, but does that merit two risk points or four?
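The indexing tool described above can be sketched as a weighted questionnaire. The questions and point values below are assumptions you would tune for your own business, which is exactly the subjectivity the text warns about:

```python
# A minimal risk-index sketch: each question maps an answer to points.
# The point values here are assumptions, not an industry standard.
QUESTIONS = {
    "customer_facing":     {True: 4, False: 0},
    "identity_management": {True: 3, False: 0},
    "regulatory_review":   {True: 3, False: 1},
    "homegrown_code":      {True: 2, False: 0},
}

def risk_index(answers):
    """Sum the point values for an application's answers."""
    return sum(QUESTIONS[q][a] for q, a in answers.items())

app = {"customer_facing": True, "identity_management": True,
       "regulatory_review": False, "homegrown_code": True}
print(risk_index(app))  # higher score = higher exposure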
Good for: Prioritizing your plans for reducing risk in key applications—homegrown or commercial. By scoring all of your top applications with a consistent set of criteria, you’ll be able to see where the most risk lies and make decisions on what risks to mitigate.
Not good for: Actuarial or legal action. The point of this exercise is for internal use only as a way to gauge your risks, but the results are probably not scientific enough to help set insurance rates or defend yourself in court.
Added benefit: A simple index like this is a good way to introduce risk analysis into information security (if it’s not already used) because it follows the principles of risk management without getting too deeply into statistics.
Try this: With your industry consortia, set up an industrywide group to use the same scorecard and create industrywide application risk benchmarks to share (confidentially, of course). One industry can reduce risk for everyone in the sector by comparing risk profiles on similar tools. (Everyone in retail, for example, uses retail point-of-sale systems and faces similar application risks.)
One possible visualization: Two-by-two grids could be used here to map your applications and help suggest a course of action. Two-by-twos break risk and impact into four quadrants: low risk/low impact, low risk/high impact, high risk/low impact, high risk/high impact. A good way to use these familiar boxes is to label each box with a course of action and then plot your data in the boxes. What you’re doing is facilitating decision-making by constraining the number of possible courses of action to four. If you need to get things done, use two-by-two grids to push executives into decision making.
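Plotting applications into the grid amounts to classifying each risk/impact pair into one of the four quadrants and attaching a course of action. A sketch (the threshold and action labels are assumptions):

```python
# Quadrant labels keyed by (high_risk?, high_impact?); the actions
# are illustrative, not prescriptive.
ACTIONS = {
    (False, False): "monitor",
    (False, True):  "plan contingency",
    (True,  False): "fix when convenient",
    (True,  True):  "mitigate now",
}

def quadrant(risk, impact, threshold=5):
    """Classify a 0-10 risk/impact pair into a course of action."""
    return ACTIONS[(risk >= threshold, impact >= threshold)]

print(quadrant(8, 9))
print(quadrant(2, 8))
```

Constraining every application to one of four actions is the point: executives decide among four options instead of debating raw scores.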