NET(net), Inc.


MSFT is betting big chunks of cash on swaying customers to its hosted stuff (Oracle, CRM, datacenter / cloud) by scottbraden

Microsoft is aggressively discounting its hosted / SaaS solutions in order to gain market share and, I suspect, to sway customers from the EA / Select / perpetual license model onto the rental / cloud / SaaS model.

Microsoft cuts prices on BPOS, to issue refunds  – 
http://ct.zdnet.com/clicks?t=475224883-f5935ee3a0b078029592318f09b1ea8e-bf&brand=ZDNET&s=5

Microsoft seeks to lure Salesforce, Oracle users with six months free of CRM Online
Microsoft chops prices of its hosted enterprise cloud offerings

 But you’ll note that’s only on the hosted offerings.

Also of note, Microsoft’s huge new billion-dollar datacenters in Chicago and Dublin are now open for business, with more coming soon.

On the traditional licensing front, Microsoft just announced price increases for SQL Server.

So, clearly, MSFT is betting big chunks of cash on swaying customers to its hosted services, and as a consequence the traditional licensing models are becoming slightly less attractive.  I would advise Microsoft customers to consider the true costs and benefits of moving from a traditional licensing approach to a model such as BPOS.  As in most things regarding Microsoft’s sales practices, there are hidden factors that may not come to light unless you ask the right questions.
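For illustration only, here is a minimal sketch of the kind of multi-year comparison I’m suggesting. Every number in it is hypothetical and should be replaced with your own quotes, and it deliberately leaves out the hidden factors (migration, integration, and exit costs, for example) that you should also be asking about:

    # Hypothetical, illustrative numbers only -- not actual Microsoft pricing.
    # A rough frame for the perpetual-license vs. hosted/subscription decision.
    users = 500
    years = 5

    # Traditional EA / Select style: up-front license plus annual Software Assurance
    # and the cost of running it yourself (all figures hypothetical).
    perpetual_license_per_user = 300.0
    software_assurance_per_user_year = 75.0
    on_prem_ops_per_user_year = 60.0

    # Hosted / BPOS style: per-user, per-month subscription (hypothetical).
    subscription_per_user_month = 10.0

    traditional_total = users * (perpetual_license_per_user
                                 + years * (software_assurance_per_user_year
                                            + on_prem_ops_per_user_year))
    hosted_total = users * subscription_per_user_month * 12 * years

    print(f"Traditional over {years} years: ${traditional_total:,.0f}")
    print(f"Hosted over {years} years:      ${hosted_total:,.0f}")

Run it with your real numbers, and the comparison often hinges less on the sticker price than on those operating and exit costs.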

-Scott Braden



Blog Series Part 1: A “Green” Data Center is More Than Meets the Eye by davidjyoung

According to the U.S. Environmental Protection Agency, “energy consumption by servers and data centers in the United States is expected to nearly double in the next five years to more than 100 billion kWh.”

This is the first in a series of blog posts exploring the topic of developing, managing, and sustaining a resource-efficient enterprise data center and the related infrastructure around it.  We will explore the responsible consumption of the resources that make up the enterprise IT environment, examine the popular notions of the “Green” data center, and go beyond the mainstream to tackle topics that have an important impact on IT-related resource consumption.

We have a responsibility to be good stewards of the resources that go into our consumption of information technology.  As global citizens we are on an unsustainable course: energy consumption is rising rapidly across the planet as underdeveloped countries advance their economies and developed nations continue to grow and increase their use of automation and other energy-consuming conveniences.  Our planet’s use of energy from non-renewable fossil fuels will likely outpace our ability to find new sources if we don’t first reduce our consumption and improve its efficiency.  In the most recent U.S. Energy Information Administration International Energy Outlook report (2009), total world consumption of marketed energy is projected to increase by 44 percent from 2006 to 2030, with the largest projected increase in demand coming from the non-OECD (Organization for Economic Cooperation and Development) economies, that is, the developing countries (http://www.eia.doe.gov/oiaf/ieo/world.html).

An unfortunate and vitally important consequence of this energy consumption is our output of carbon dioxide (CO2) emissions, scientifically regarded as a contributor to climate change.  Even with all projected CO2 emission reduction programs underway or planned factored in, total CO2 emissions are projected to increase by 17 percent from 2010 to 2020 (http://www.state.gov/g/oes/rls/rpts/car/90324.htm).

We clearly still have our work cut out for us.

According to the U.S. Environmental Protection Agency, “energy consumption by servers and data centers in the United States is expected to nearly double in the next five years to more than 100 billion kWh, costing about $7.4 billion annually.”  Similar energy cost increases are expected in Europe, Asia, and elsewhere (http://www.energystar.gov/ia/partners/prod_development/downloads/EPA_Datacenter_Report_Congress_Final1.pdf).
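As a quick back-of-the-envelope check on those figures (a sketch; only the 100 billion kWh and $7.4 billion numbers come from the EPA report, the rest is simple arithmetic):

    # Back-of-the-envelope check of the EPA figures cited above.
    annual_kwh = 100e9    # more than 100 billion kWh projected
    annual_cost = 7.4e9   # about $7.4 billion projected annual cost

    implied_price_per_kwh = annual_cost / annual_kwh
    print(f"Implied electricity price: ${implied_price_per_kwh:.3f} per kWh")  # ~$0.074/kWh

    # "Nearly double in five years" implies roughly this compound annual growth rate:
    growth_rate = 2 ** (1 / 5) - 1
    print(f"Implied annual growth: {growth_rate:.1%}")  # ~14.9% per year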

Even so, data centers, along with the other information infrastructure in businesses and homes that supports our information and communication needs around the world, are still a drop in the bucket compared with overall energy consumption.  What we do have is the capability to turn information technology into a source of energy-saving solutions.

We see this today with smart grid technology, which utility companies apply to manage home energy usage and to provide bi-directional communication between home appliances and the energy company so that energy is used wisely and efficiently.  Applied to the data center, the same smart energy thinking can be timed to the business cycle, reducing energy consumption on resources that don’t have to run at full throttle to support a business application that is comparatively idle.
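As a purely illustrative sketch (the schedule, server counts, and wattages below are assumptions, not measurements from any real environment), timing capacity to the business cycle can be as simple as:

    # Illustrative sketch: scale active capacity to the business cycle.
    # Schedule, server counts, and wattages are hypothetical assumptions.
    BUSINESS_HOURS = range(7, 19)   # 7am-7pm, when the application is busy
    ACTIVE_SERVERS_PEAK = 20
    ACTIVE_SERVERS_OFF_PEAK = 6     # enough for overnight batch work and availability
    WATTS_PER_SERVER = 400

    def servers_needed(hour: int) -> int:
        return ACTIVE_SERVERS_PEAK if hour in BUSINESS_HOURS else ACTIVE_SERVERS_OFF_PEAK

    # Estimated daily energy with and without the off-peak scale-down:
    scheduled_kwh = sum(servers_needed(h) * WATTS_PER_SERVER for h in range(24)) / 1000
    always_on_kwh = 24 * ACTIVE_SERVERS_PEAK * WATTS_PER_SERVER / 1000
    print(f"Always-on: {always_on_kwh:.0f} kWh/day; scheduled: {scheduled_kwh:.0f} kWh/day")

In this made-up example the off-peak scale-down saves roughly a third of the daily energy; the real savings depend entirely on your workload profile.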

We will explore these ideas and more in upcoming blog posts, as we delve into improving our information-to-energy ratio: squeezing more information out of the energy necessary to produce it and, perhaps taken to the extreme, spending less energy on information that has less value.  Now that’s a tricky topic!

Stay tuned for future posts.



Is an Official Classification of Data Center Availability Capability Important? by davidjyoung

The Uptime Institute developed a tiered classification approach to data center site infrastructure functionality and high availability, addressing the need for a common benchmarking standard in an area that had previously been based largely on opinion and conjecture.  The system has been in practice since 1995 and is often cited by enterprises and co-location/managed hosting service providers to tout the robustness of their data centers.  The tiers range from Tier I (Basic Data Center), where there is a single path for power and cooling distribution without redundant components, providing 99.671% availability, to Tier IV (Fault Tolerant), where the site infrastructure has the capacity and capability to permit any planned or unplanned activity without disruption to the critical load, providing 99.995% site availability.
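To put those percentages in concrete terms, here is the simple arithmetic for the downtime each figure allows over a year (only the two availability figures come from the tier definitions above):

    # Annual downtime implied by the availability figures cited above.
    HOURS_PER_YEAR = 24 * 365

    for tier, availability in [("Tier I", 0.99671), ("Tier IV", 0.99995)]:
        downtime_hours = (1 - availability) * HOURS_PER_YEAR
        print(f"{tier}: {availability:.3%} availability allows about "
              f"{downtime_hours:.1f} hours ({downtime_hours * 60:.0f} minutes) of downtime per year")

That works out to roughly 28.8 hours of allowable downtime per year for Tier I and about 26 minutes per year for Tier IV.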

The Uptime Institute has recently asserted two things in its leadership role in this area: first, there is no such thing as “almost Tier III” or “Tier II+”; you either meet the strict definition of Tier III or you don’t.  Second, you must be “certified” by the Uptime Institute’s certification body, or by an Uptime Institute-trained and certified consultant, to refer to your data center as meeting one of these classification levels.

I tend to think that the Uptime Institute’s tier classification has become a de facto standard, and that it is a little late, and somewhat disingenuous, to assert control now over how the term is used to describe a data center. I think it is telling that the Uptime Institute reports that only “two dozen” data centers have had their tier rating certified.

This type of rating should be within the purview of an international standards body, not an organization, even a not-for-profit organization, that stands to benefit financially from certification. Falling short of the strict definition of a particular tier level, such as the difference between Tier II and Tier III, does not necessarily mean the data center is failing strict compliance for redundancy in other important areas. There is no weighting of individual measures of failover and redundancy; it is all or nothing.

While I am not advocating ‘shades of gray’ when it comes to building a robust infrastructure, many factors come into play when evaluating the availability of that infrastructure. All of it is for naught if the application architecture, the sole reason for having a data center in the first place, is not built with sufficient resiliency and failover. No Tier IV data center is going to save a poorly architected application.
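A minimal sketch of why (the application availability figure below is a hypothetical assumption for illustration): when the application and the site are effectively in series, the weaker of the two dominates the end-to-end number.

    # End-to-end availability when the site and the application are in series.
    site_availability = 0.99995   # Tier IV site, per the classification above
    app_availability = 0.995      # hypothetical, modestly architected application

    end_to_end = site_availability * app_availability
    print(f"End-to-end availability: {end_to_end:.3%}")   # about 99.495%, far below Tier IV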