
Hidden costs of cloud computing

Cloud, or more aptly cloud computing, has come a long way since 2006, when Google's Eric Schmidt popularised the term while describing his company's approach to software-as-a-service (SaaS).

Today, a whole new (or refurbished) industry worth $100 billion has been built around cloud computing, touted as the next era of IT. On one side, the hype around cloud is fading even as more and more enterprises adopt it; on the other, some have realised that not everything is as rosy as it was made out to be.

While early adopters such as Netflix tasted success by making the first move to cloud, several others, especially from the enterprise community, ended up disappointed because they achieved neither cost effectiveness nor lower capital expenditure (capex), despite going for an operational expenditure (opex) model. Several factors lead to such situations: enterprises tend to overlook many considerations, or make late realisations about what to invest where. Let us look at some of these unexpected, or hidden, costs, which call for additional investment.

1. Opex need not always be the best

The first 'hidden cost' is incurred by believing cloud vendors who claim that opex (operational expenditure), and not capex (capital expenditure), is the right way of doing business, and that pay-per-use is the way to go.

Be careful when someone claims that capex is wrong and opex is the right way of doing IT budgets. Budgeting should be a choice made by the chief financial officer of an enterprise, not by a cloud vendor. If a company has plenty of cash in hand, it can go for opex; whereas if a company buys on an annual budget, as those in the public sector do, it will find opex difficult to finance.

If you have been repeatedly told that cost effectiveness is what cloud is all about, think again: adopting cloud is not automatically the cheaper option, and several aspects need to be considered to optimise cost, and other related aspects, while an enterprise is on cloud.

In terms of SaaS, the hidden costs fall into three areas. The first is customization: the more you use a SaaS solution as it was designed, the lower your costs. Customizations can quickly lead to development and maintenance costs you did not anticipate; this is the most widely made error by enterprises. It is more cost effective to teach your employees to use the SaaS as designed than to try to bend it to your processes. This isn't always possible, but it is a good rule of thumb. The second is integration. You will inevitably integrate SaaS services with in-house applications, data stores and other SaaS services. These integrations must be built, managed and maintained; best practice is to define a clear integration architecture and use as few integration mechanisms as possible.

The third area is sprawl: an enterprise buys SaaS for, say, 15 employees, but when it opens the same service to 1,500 employees, suddenly $99 per user could cost more than an in-house solution. Buying an application or product rather than renting one is, after all, much like buying a house rather than renting it.
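The sprawl effect is easy to quantify. Here is a back-of-the-envelope sketch in Python using the illustrative $99-per-user figure from above; the price and head counts are assumptions, not any particular vendor's rates:

```python
def annual_saas_cost(users, price_per_user_month=99.0):
    """Yearly subscription spend at a flat per-user monthly price."""
    return users * price_per_user_month * 12

pilot = annual_saas_cost(15)      # the original 15-seat purchase
rollout = annual_saas_cost(1500)  # the same service, company-wide

print(f"Pilot:   ${pilot:,.0f}/year")    # $17,820/year
print(f"Rollout: ${rollout:,.0f}/year")  # $1,782,000/year
```

A hundredfold increase in seats means a hundredfold increase in the bill, and that is the point at which an in-house deployment may become the cheaper option.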

One can make an upfront investment in storage and network today and own that particular technology. In an opex model, by contrast, you pay a set amount per month instead of investing lakhs of rupees upfront; you own none of the software, only the data. You still have to go through service level agreements (SLAs), tax, interest and other kinds of costs. So, even five years down the line, you own nothing, and may well have ended up spending more money.

In a high-interest country like India, however, an opex investment will often be more suitable, as it frees up capital that can in turn be utilised for other projects.
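The rent-versus-own trade-off above can be sketched with a toy comparison. All figures here are hypothetical, and a real analysis would discount cash flows and include the SLA, tax and interest costs mentioned above:

```python
def capex_total(upfront, annual_rate, years):
    # Upfront purchase plus the opportunity cost of the locked-up
    # capital, compounded at the cost of capital / interest rate.
    return upfront * (1 + annual_rate) ** years

def opex_total(monthly_fee, years):
    # Plain sum of subscription payments (no discounting).
    return monthly_fee * 12 * years

# Hypothetical: buy storage hardware outright for 1,000,000 versus
# renting equivalent capacity at 20,000 per month.
buy = capex_total(1_000_000, 0.10, 5)   # 10% cost of capital, 5 years
rent = opex_total(20_000, 5)
print(f"Own after 5 years:  {buy:,.0f}")
print(f"Rent after 5 years: {rent:,.0f}")
```

At a high cost of capital the rental model can come out ahead even though nothing is owned at the end, which is exactly the argument for opex in a high-interest economy.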

2. Pay-as-you-go is not magic

There are times when CIOs do not realise how much they are shelling out, since a credit card is the usual mode of payment for cloud services.

In a public cloud setting, the hidden costs relate to the fact that CIOs are usually unaware of how much public cloud is being used, because payment goes on company credit cards, which are also used by their developers who build applications on cloud. At the end of the day, when you combine all these expenditures, the amount is quite significant. Moreover, CIOs do not realise how much cloud resources will cost, because the cost structure of these resources is quite complex: computing, processing, storage and network bandwidth can all add up as expenditure.
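Because compute, storage and bandwidth are metered separately, an itemised estimate makes the spend visible before the credit card statement arrives. A minimal sketch with made-up unit prices (real provider price lists are far longer):

```python
# Hypothetical unit prices; these are assumptions for illustration.
PRICES = {
    "vm_hours": 0.10,     # per instance-hour
    "storage_gb": 0.023,  # per GB-month
    "egress_gb": 0.09,    # per GB of outbound bandwidth
}

def monthly_bill(usage):
    """Break a month's usage into per-item costs plus a total."""
    lines = {item: usage.get(item, 0) * rate for item, rate in PRICES.items()}
    return lines, sum(lines.values())

# Three VMs running all month, 2 TB stored, 500 GB transferred out.
lines, total = monthly_bill({"vm_hours": 3 * 730,
                             "storage_gb": 2000,
                             "egress_gb": 500})
for item, cost in lines.items():
    print(f"{item:>10}: ${cost:8.2f}")
print(f"{'total':>10}: ${total:8.2f}")
```

Even this toy bill shows how three modest line items quietly combine into a significant monthly figure.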

There are other aspects as well which can add to the extra costs, such as:
Not activating cloud economics. Not every application is a fit for a pay-per-use platform. The best fits are those that take advantage of the pricing model through either elastic scale or transiency. Elastic scale means the app increases or decreases its resource consumption based on use; the best fits are apps that do this as granularly as possible. Transient apps are those that are not active all the time and can be parked or completely shut off when not in use: batch work, high-performance computing, and seasonal or cyclical applications are all good examples. An app that just sits there 24/7 consuming the same resources is usually a bad fit and should be moved either back into your data centre or to traditional hosting.
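The value of transiency is easy to see with numbers. A sketch comparing an always-on instance against one parked outside working hours; the hourly rate and schedule are assumptions:

```python
HOURLY_RATE = 0.50  # hypothetical cost of one instance-hour

def monthly_cost(hours_per_day, days_per_month=30):
    """Pay-per-use spend for an instance running on a given schedule."""
    return hours_per_day * days_per_month * HOURLY_RATE

always_on = monthly_cost(24)       # sits there 24/7
transient = monthly_cost(10, 22)   # parked nights and weekends
print(f"Always-on: ${always_on:.2f}/month")  # $360.00/month
print(f"Transient: ${transient:.2f}/month")  # $110.00/month
```

A workload that cannot be parked gains nothing from this pricing model, which is why the always-on application belongs back in the data centre or on traditional hosting.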

3. Cloud is not just a technology, but also a service

The next 'hidden cost' can come in the guise of the misconception that cloud is first and foremost a technology, rather than a service. Enterprises thus tend to ignore the investment required to make their people and processes cloud ready. Technology is just one of the aspects: out of Gartner's 10 parameters of cloud requirements, it features as only the eighth.

Cloud is actually a service, not just a technology; it has other aspects, such as people and process. So when you look at it from a service perspective, irrespective of whether it is a private, public, hybrid or multi-cloud setup, process is one of the most important things that has to be in place.

The robustness and maturity of processes will decide how beneficial or viable cloud services can be to an enterprise. In the current scenario, however, the robustness of enterprise processes is very low, and in a bid to improve it enterprises end up spending more.

On a scale of one to five, the maturity of processes in Indian enterprises is below two, and Gartner believes it should be at least three for cloud to be beneficial. So a lot of money goes into streamlining processes, implementing best practices, getting the right certifications, and implementing basics such as a service portfolio, service management or a service catalogue. It may also involve training people in the new processes, getting them certified, and maintaining a new process management tool.

A cloud-ready employee, on the other hand, will be very different from the traditional hardware management, power-and-cooling, or Java or .Net expert. All of these tasks will instead be handled by a 'T-shaped employee' who does multiple things, including vendor management, SLA management and contract negotiation. Such employees need additional training.

4. What to invest where

The next 'hidden cost' comes from a lack of transparency in IT budget allocation, and from enterprises not understanding what they already have and what more they seek from cloud.

In public cloud you have no upfront investment, whereas in private cloud you do. What you should understand is to what extent the new product on cloud will be more cost effective than what you currently have. The problem is that many companies do not have a good understanding of exactly what costs what, and of what consumes which resources in the company. Companies have an annualised budget, but when it comes to allocation (which applications are costly, which lines of business rely heavily on IT resources), things turn out to be difficult.

Cloud is also about scale, availability and uptime; however, there is a hidden element in it: downtime. Even one per cent of downtime can turn out to be a huge expense.

When looking at cloud solutions, there are a few factors decision makers need to think about, the biggest of which are availability (uptime) and security. Availability refers to how much uptime the cloud provider will guarantee. Generally, top providers guarantee around 99 per cent availability, which still allows up to 1 per cent of downtime a year. This downtime can be due to external factors such as acts of God (natural disasters, power cuts). Decision makers need to assess the cost to their business of potential downtime.
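Translating an SLA percentage into hours makes the hidden downtime element concrete:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def downtime_hours(availability_pct):
    """Yearly downtime implied by an availability guarantee."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime allows {downtime_hours(sla):.1f} h of downtime/year")
```

A 99 per cent SLA still permits roughly 87.6 hours of outage a year; multiplying that by the hourly cost of an outage to the business gives the exposure a decision maker should weigh.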

The journey to cloud is not a one-day affair. It needs a lot of preparation in terms of bandwidth, network capability and so on, so that an enterprise is ready to do its job on cloud. And this upfront investment is something you may not have anticipated at the onset of the journey.

Moving from an in-house solution to a cloud-based one is not easy. You may have to make a lot of what could be called upfront investments: increase bandwidth, improve networking capabilities, standardise certain technologies and platforms, buy products such as a cloud platform or cloud management solutions, integrate several new systems that may not be easy to integrate, and work with service providers or system integrators who will negotiate on your behalf with multiple cloud vendors. So there is a lot of cost involved.

5. Security comes at a cost

Cloud vendors harp a lot on how secure data will be on cloud as against a traditional environment. However, if you are not aware of what security means in different contexts, you are in for surprises. That makes for another 'hidden cost'.

This is especially true in countries where data protection legislation imposes specific criteria. In Europe, for example, data protection law requires that personal and sensitive data be stored in a highly secure manner. Other countries typically have less stringent regulatory requirements, so European decision makers need to be sure that they do not fall foul of European compliance by storing data abroad. If a breach occurs, or compliance is not met, companies could face serious fines: in the UK, for example, both the Financial Services Authority and the Information Commissioner's Office can levy substantial fines for data protection breaches. This could cost decision makers a lot of money.

6. Public cloud is not always cheap: Keep a limit

This hidden cost comes out of the myth that public cloud is cheap. Things as simple as not turning off systems on cloud, opening up applications to a large number of users, and several other factors determine the cost to a large extent.

People tend to believe the myth because, most of the time, users are not good at the math, and the rest of the time they do not manage their use of cloud resources. In certain cases they create a server on a public cloud and, when they are done with it, leave it running instead of deleting it, and thus keep paying for it even though they no longer use it. Sometimes they use a server that is too big for what they are doing.

On the cloud platform front, services tend to have a pay-per-use model whose cost is affected by application behaviour as much as by use patterns. Thus the hidden cost to avoid is not turning things off. It is easy to see how pay-per-use keeps your start-up costs low and makes elastic scaling easy as traffic rises; it is just as easy to ignore application load patterns when they go the other way. This is where you can save tremendous money, by turning off resources that are no longer needed. Another aspect is that storage grows and never shrinks on its own. On a pay-per-use service you are constantly reminded of this, which means you need to actively manage your storage consumption: move data to lower-cost services when it is no longer in constant use, leverage caching as much as possible, and delete files, or copies of files, that you do not need.
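The 'turn things off' discipline can be automated. Here is a toy inventory sweep that flags instances nobody has touched recently and estimates the monthly waste; all names, rates and dates are invented for illustration:

```python
from datetime import datetime, timedelta

now = datetime(2013, 6, 1)  # pretend 'today'
# Hypothetical inventory: (name, hourly cost, last time anyone used it)
servers = [
    ("build-box", 0.40, now - timedelta(days=45)),
    ("prod-web",  0.80, now - timedelta(hours=1)),
    ("old-demo",  0.20, now - timedelta(days=120)),
]

def idle_candidates(servers, idle_days=30):
    """List servers idle past the threshold and the money they burn."""
    stale = [(name, cost) for name, cost, last in servers
             if (now - last) > timedelta(days=idle_days)]
    wasted = sum(cost * 24 * 30 for _, cost in stale)  # approx. monthly waste
    return stale, wasted

stale, wasted = idle_candidates(servers)
print([name for name, _ in stale])     # ['build-box', 'old-demo']
print(f"~${wasted:.0f}/month wasted")  # ~$432/month
```

Running a sweep like this on a schedule, and actually deleting or parking what it finds, is the difference between pay-per-use and pay-for-nothing.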

7. Not everybody is a cloud provider

Today, anyone and everyone claims to be a cloud provider, so it is all the more important to keep vigil and not fall into the trap of a bogus vendor. This is our last 'hidden cost'.

It is important to consider the viability of a given provider. Megaupload is a good example: although not strictly speaking a cloud provider, it did provide cyberlocker services. Its spectacular shutdown by law enforcement last year put many small businesses in trouble; unable to access their files, many lost sales and revenue. The cost of going to court to get data released is heavy, and the effort may not always succeed.

So, as we have seen, there are lots of ways in which CIOs can be caught unawares when it comes to cloud and its associated costs. That does not mean cloud is not good; just keep in mind that there is a lot more to it than what vendors project, and than what meets the eye or ear.

UTMs Vs Traditional Security: What's Better?

We try to uncomplicate the answer to this key question by elaborating on the key features you must not compromise on while purchasing a UTM (Unified Threat Management) appliance, how a UTM can benefit an SME in the long run, and how security maintenance and licensing requirements get simplified through its use.

Unified Threat Management, as the name suggests, is for those who want a one-stop solution for ease of management. This gateway-level security solution comprises features like anti-spam, anti-virus, intrusion detection/prevention, firewall, bandwidth management, VPN, and centralised management and reporting. With multiple vendors offering UTM solutions in addition to open source options, vendors increasingly differentiate themselves by adding new features to their product lines.

Below we have listed some must-have features, keeping the future of IT security in mind:

Fast processing speed

Most UTM vendors sell their product as an appliance: a combination of optimised software and hardware. With so much pressure on these gateway devices, which have to inspect every packet that passes through them, the UTM appliance itself can become a performance bottleneck. To enhance appliance performance, vendors are moving to multi-core processors and exploiting this multicore capability with multi-threaded UTM operating systems.

Gigabit throughput

Though we are still far from the time when we will use Gigabit Internet, it is better to invest in infrastructure capable of handling such speeds, as these purchases are not made every year.

User level authentication

IP- and MAC-based filtering in firewalls is still common, but with concepts like BYOD and the addition of new computing devices (smartphones and tablets) to the organisational environment, providing foolproof security based on IP and MAC is becoming difficult, and at times impossible. Here is a brief story of what our IT team highlighted while testing a pilot NComputing deployment: since a single machine with a single IP is shared by multiple users in NComputing, the IP-based firewall became irrelevant. To overcome such very practical issues, it is recommended to go for a UTM device capable of authenticating the user rather than the IP/MAC.

Application Firewall

Application firewalls are capable of blocking particular applications while leaving others alone; this is yet another must-have feature in your UTM. A number of P2P applications have been bugging network admins for years, and with an application firewall, blocking them is quite simple. On similar lines, other applications with a high perceived risk can be better managed with this feature.
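Conceptually, an application firewall matches traffic against the application identified by inspection rather than against an IP/port pair. A toy rule evaluator to illustrate the idea; the rule table and labels are invented, not any vendor's syntax:

```python
# First matching rule wins; "*" is the catch-all default.
RULES = [
    ("bittorrent", "block"),  # the P2P traffic admins want gone
    ("skype",      "allow"),
    ("*",          "allow"),  # default action
]

def decide(app_label):
    """Return the action for traffic identified as a given application."""
    for pattern, action in RULES:
        if pattern == "*" or pattern == app_label:
            return action
    return "block"  # fail closed if no rule matches

print(decide("bittorrent"))  # block
print(decide("http"))        # allow
```

The key point is that the match key is the application label, so a P2P client hopping between ports is still caught by one rule.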

Support for both IPSec and SSL VPN

A secure connection to remote locations is a must these days, as an increasing number of people prefer working from home to better manage their private lives without hampering their professional ones. VPN has been the technology of choice to enable this setup, so the next time you go for a UTM, make sure it supports both client-based IPSec VPN and clientless SSL VPN. With the increasing popularity of SSL VPN, having this feature is a must for future usability.

Support for 3G/4G and WAN failover

To provide additional Internet failover besides the existing inbuilt WAN failover mechanism, UTMs these days also support wireless Internet technologies like 3G/4G. Having this additional failover mechanism in place means almost zero downtime even if the wired network is down: just plug a 3G-capable dongle into your UTM and enjoy additional peace of mind.

How a UTM Simplifies Security Management

There has always been a debate between Unified Threat Management and the best-of-breed approach. Traditionally, organisations use a point solution to protect themselves against each type of threat. Such a standalone, or “best-of-breed”, security strategy often consumes huge amounts of money, resources and management time. Disparate security devices and operating systems come with multiple maintenance and support contracts, multiple upgrade and replacement schedules, multiple licensing obligations, and multiple training programs and management resources. All of these add to the cost and complexity of an organisation's security infrastructure, and can have a serious negative impact on uptime, availability and performance.
Since firms are now realizing these disadvantages, they are migrating to consolidated security platforms, or UTM devices, to reduce network complexity and switch Capex to Opex.

How a UTM Scores Over Traditional Security Solutions

UTM technology has several advantages, including ease of deployment, use and management; flexibility (the ability to turn on whatever security functionality you need, whenever you need it); and high ROI (a single UTM appliance is typically far more cost-effective than several standalone solutions). The fact that the various security functionalities within a UTM appliance are produced by one vendor typically also means better integration and coverage between these technologies. SMEs have been keener to adopt UTM solutions than large enterprises, but the situation has changed significantly in the last few years. With better education and awareness of integrated threat technologies, enterprises now realize that UTMs are not rudimentary or “short-cut” solutions targeting small organizations with few IT resources. More of them now understand that today's advanced UTMs perform better than single-point solutions, and can cover the gaps left unattended by traditional standalone solutions.

While some SMEs in India are still content to have just software protecting their organisations, they need to realise that software alone does not offer the performance of UTMs and cannot cover the full spectrum of threats that UTMs can tackle.

The Case for UTMs in SMEs

For SMEs, there are no factors discouraging them from embracing UTMs per se, once the benefits are properly explained to them. Some smaller enterprises, however, have significant constraints on technology budgets; some of them still think of IT as an expense rather than an investment. Thankfully, this mindset is changing, and small organizations adopting IT solutions are embracing those that provide cost benefits, which UTMs definitely do in the security category. The commoditization of network security is also helping SMEs in this regard: rather than investing heavily in network security solutions, smaller firms can now use a subscription-based model to implement network security on their premises through a managed service provider.

This has allowed organizations to put their IT spend on an Opex rather than a Capex model. Going forward, we expect such managed services to become more readily available, giving more SMBs access to the same levels of security traditionally enjoyed by large enterprises.


Digital Seduction: Why customers buy your product

Economists at some point decided that consumers make informed product purchases: a good balance between price and quality.
For decades, however, this view has been falling apart, as consumers' decisions are not rational. In my opinion this explains the differences in conversion between online and offline stores.

Psychologists have for decades been sawing off the branch the rational economic man has been sitting on: the man who maximizes his utility over all possible choices. They have shown very convincingly that we not only have a rational system that steers our decisions (System 2), but also, perhaps more importantly, an evolutionary, intuitive system (System 1).

This latter system ensures that we go along with the crowd without figuring out whether this is a good idea. It ensures that we listen to people in lab coats without checking whether they really are experts, and it is why we are very bad at estimating odds or assessing bets.

Despite the importance of psychology in our purchasing decisions, its use remains an underdeveloped element of online commerce compared to offline commerce. Offline we have good salesmen, people who know how other people react and how you can affect them.

Online we reach more potential customers. We get them to our websites, but we have forgotten how we can convert them.

To reach people or to influence people?

Due to the shift from offline to online sales, we have increased our reach: a good salesperson could talk with up to forty people in one day, while a good website reaches millions. At the same time, our impact has decreased: the probability that an interaction actually leads to a sale has dropped dramatically.

Of course we are already working hard to increase our impact. We make sure we have the right products for people (recommender systems) and we ensure that we reach the right people: those who want to buy (SEO and SEA).

But we forget one thing that a good salesman does very well: sell the product in the correct manner. A good salesperson can sell anything. Not because he knows a lot about the product, but because he, perhaps intuitively, can understand the psyche of his client.

The psychology behind human decisions

To know how people react, we need to embrace the notion of System 1: people use mental shortcuts to make their decisions. They do not weigh all the pros and cons, but are affected by so-called ‘peripheral’ cues.

An example: try to sell a book and say that you still have a whole pile of them lying around. Then try to sell the same book and say that you have only five left. This says more about your own purchasing procedures than about the book, but in the second case people are still more inclined to buy it.

The six weapons of influence

There are six ways in which successful salespeople influence others, bundled together as six principles of influence: six ways of stimulating the customer's System 1:
  • Scarcity: people love things that are scarce or special. So, “only five products available”.
  • Reciprocity: People return a favor. Online example: “Download our free whitepaper”.
  • Social Proof: People do what others do. This is the main reason that everyone is collecting Likes on their company website.
  • Authority: People follow the advice of authorities. So ‘recommended by the editor’, or the man in the lab coat in the toothpaste commercial.
  • Liking: People listen to people they like. This is why a good salesperson asks you how your holiday was and why he finds it so interesting that you also kite surf.
  • Consistency: People try to be consistent. The wish lists of some online retailers are a nice implementation of this: by letting it be known that you want a specific thing, you become more likely to eventually buy it.
Hundreds of social scientists have clearly shown that the qualities of the product you are selling account for only a small part of the real story: people buy products because they are using System 1 and are tempted by the six weapons of influence.

Effective influence

In recent years we have applied influencing theories online only scantily. Offline salespersons are experts in applying these theories: hence the large differences between online and offline conversion.

A first step towards increasing impact is implementing the principles of influence on your website. But even so, we are not nearly as good as our offline counterparts: to get there, we need to select the appropriate approach, the right influencing strategy, for each individual customer.

That is what it means to really know why your visitors buy your products.