NET(net), Inc.


IT Sustainability by szolman

Major technology providers are beating street estimates lately as technology spending is seemingly on the mend.

As spending picks up and market conditions change, will clients and suppliers be able to develop and maintain sustainable agreements that offer best-in-class pricing and flexible terms and conditions, and that serve as a guide on how to do good, long-term business together?

The major tenets of IT Sustainability are timeless, and they are published for anyone to see right here: http://tinyurl.com/yd7ckf5.

If you are making an investment in IT, find out how you can get maximum value and keep it by clicking the link above.

The sustainability of your IT investments and relationships is our mission.

Full Article Available at:
http://www.netnetweb.com/index.php/blog/entry/is_your_deal_sustainable/



Third Party Providers by jigordon
January 4, 2010, 9:32 am
Filed under: current events, law, maintenance, Outsourcing, Uncategorized

Happy New Year!

I saw an interesting article today about high-tech vehicles posing problems for some mechanics.  The mechanics claim they can’t afford the thousands of dollars needed to obtain the specialized diagnostic tools for each auto manufacturer.  The manufacturers claim they’re trying to protect their intellectual property.

Sound familiar?  Yup, it’s exactly like the issues Frank Scavo and Ray Wang have written about with regard to third-party software providers being blocked from performing various maintenance and implementation tasks by the contracts, software licenses, and services agreements of certain primary vendors.

On the automotive side, it’s apparently become such an issue that there’s a congressional bill called the Motor Vehicle Owners Right to Repair Act of 2009.  The stated purpose of this Bill is to “protect the rights of consumers to diagnose, service, maintain, and repair their motor vehicles”.  What’s really interesting are the Bill’s findings, which include:

  • Motor vehicle owners are entitled to choose which service provider will diagnose, service, maintain, or repair their motor vehicles.
  • Promoting competition in price and quality… will benefit consumers.
  • Only service technicians with the necessary tools and information can access the computers to perform diagnosis, service, maintenance and repair…

And the requirements of the Bill, specifically:

  • Duty to Make Tools Available:  The manufacturer of a motor vehicle sold, leased, or otherwise introduced into commerce in the United States must offer for sale to the motor vehicle owner, and to all service providers on a reasonable and non-discriminatory basis, any tool for the diagnosis, service, maintenance, or repair of a motor vehicle, and must provide all information that enables aftermarket tool companies to manufacture tools with the same functional characteristics as those made available by the manufacturers to authorized dealers.
  • Replacement Equipment: The manufacturer of a motor vehicle sold, leased, or otherwise introduced into commerce in the United States must offer for sale to motor vehicle owners, and to all service providers on reasonable and non-discriminatory terms, all equipment for diagnosis, service, maintenance, or repair of a motor vehicle.

The only things the Bill protects for the manufacturer are actual trade secrets.

Wow.  Of course, there are a LOT of people (and more specifically, a lot of trade associations and advocacy groups) behind this Bill.

Could you imagine what would happen if this passes and someone realizes that the software in cars isn’t that dissimilar to plain old enterprise software?  If only there were a trade association for buyers of enterprise software apps.  😉

But let’s talk about the other side of the issue for a moment.  Do consumers have a right to have third-party companies provide service?  A right?  No.  I don’t think there’s a right to be able to have third-party providers.  [Keep in mind, when we’re talking about rights, we’re talking about things equal to “life, liberty and the pursuit of happiness…”.]

Absent a right, should third-party providers still be allowed and encouraged?  I’m really torn on this.  On one hand, I’m all in favor of things that inspire commerce.  I like behaviors that create business, allow more people to work… and, of course, things that drive down costs and dissipate apparent monopolies.  On the other hand, an individual or organization who creates something should be able to protect their idea or invention and not have to give up the secret sauce simply so that other people can benefit.  But there seems to be a line somewhere that, once crossed, should allow third-party companies to fill available niches.  Maybe it’s where the original vendor is no longer able to provide a quality level of service.  Maybe it’s a situation where the original vendor is charging exorbitant rates.  I’m not sure.

Anyone have a solution?



MSFT is betting big chunks of cash on swaying customers to their hosted stuff. (Oracle, CRM, datacenter / cloud) by scottbraden

Microsoft is aggressively discounting its hosted / SaaS solutions in order to gain market share and, I suspect, to sway customers from the EA / Select / perpetual license model onto the rental / cloud / SaaS model.

Microsoft cuts prices on BPOS, to issue refunds  – 
http://ct.zdnet.com/clicks?t=475224883-f5935ee3a0b078029592318f09b1ea8e-bf&brand=ZDNET&s=5

Microsoft seeks to lure Salesforce, Oracle users with six months free of CRM Online
Microsoft chops prices of its hosted enterprise cloud offerings

But you’ll note that’s only on the hosted offerings.

Also of note, Microsoft’s huge new billion-dollar datacenters in Chicago and Dublin are now open for business, with more coming soon.

On the traditional licensing front, Microsoft just announced price increases for SQL Server.

So, clearly, MSFT is betting big chunks of cash on swaying customers to its hosted services, and as a consequence the traditional licensing models are becoming slightly less attractive.  I would advise Microsoft customers to consider the true costs and benefits of moving from a traditional licensing approach to a model such as BPOS.  As in most things regarding Microsoft’s sales practices, there are hidden factors that may not come to light unless you ask the right questions.

-Scott Braden



Blog Series Part 1: A “Green” Data Center is More Than Meets the Eye by davidjyoung

According to the U.S. Environmental Protection Agency, “energy consumption by servers and data centers in the United States is expected to nearly double in the next five years to more than 100 billion kWh.”

This is the first in a series of blog posts exploring the topic of developing, managing, and sustaining a resource-efficient enterprise data center and the related infrastructure around it.  We will look at the responsible consumption of the resources that make up the enterprise IT environment, examine the popular notions of the “Green” data center, and go beyond the mainstream to tackle topics that have an important impact on IT-related resource consumption.

We have a responsibility to be good stewards of the resources that go into our consumption of information technology.  As global citizens we are on an unsustainable collision course, given the rapid consumption of energy across the planet as underdeveloped countries advance their economies and developed nations continue to grow and increase their use of automation and other energy-consuming conveniences.  Our planet’s use of energy from non-renewable fossil fuels will likely outpace our ability to find new sources if we don’t first reduce and improve the efficiency of our consumption.  In the most recent U.S. Energy Information Administration International Energy Outlook report, published in 2009, total world consumption of marketed energy is projected to increase by 44 percent from 2006 to 2030, with the largest projected increase in energy demand coming from the non-OECD (countries outside the Organisation for Economic Co-operation and Development, i.e., developing) economies (http://www.eia.doe.gov/oiaf/ieo/world.html).

An unfortunate and vitally important consequence of this energy consumption is our output of carbon dioxide (CO2) emissions.  CO2 is scientifically regarded as a contributor to climate change, and total CO2 emissions, even when calculated with all projected emission-reduction programs underway or planned taken at full measure, are projected to increase by 17 percent from 2010 to 2020 (http://www.state.gov/g/oes/rls/rpts/car/90324.htm).

We clearly still have our work cut out for us.

According to the U.S. Environmental Protection Agency, “energy consumption by servers and data centers in the United States is expected to nearly double in the next five years to more than 100 billion kWh, costing about $7.4 billion annually”.  Similar energy cost increases are expected in Europe, Asia, and elsewhere. (http://www.energystar.gov/ia/partners/prod_development/downloads/EPA_Datacenter_Report_Congress_Final1.pdf).

Even so, data centers, together with the other information infrastructure in businesses and homes that supports our information and communication needs around the world, are still a drop in the bucket compared with overall energy consumption.  What we do have is the capability to turn information technology into a solution for energy savings.

We see this today with smart-grid technology, which utility companies use to provide bi-directional communication between home appliances and the energy company so that home energy usage can be managed wisely and efficiently.  Applied to the data center, smart energy technology can be timed to the business cycle to reduce energy consumption on resources that don’t have to run at full throttle to support a business application that is comparatively idle.
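
As a purely illustrative sketch (not any vendor’s actual API), the idea can be reduced to a simple schedule- and utilization-driven policy: run the full server pool during business hours or under real load, and park most of it when the application is comparatively idle.  The business-hours window, idle threshold, and minimum pool size below are assumptions chosen only for illustration.

```python
from dataclasses import dataclass


@dataclass
class PowerPolicy:
    """Hypothetical schedule-driven power policy for a pool of application servers."""
    business_hours: range = range(7, 19)   # assumed business window, 7:00-18:59 local time
    idle_threshold: float = 0.20           # below 20% average CPU counts as "comparatively idle"
    min_active_servers: int = 2            # always keep a small baseline online

    def target_active_servers(self, hour: int, avg_utilization: float, pool_size: int) -> int:
        """Decide how many servers in the pool should stay at full power."""
        if hour in self.business_hours or avg_utilization >= self.idle_threshold:
            return pool_size  # business cycle is active: run full throttle
        # Off-hours and idle: scale the active count down in proportion to the load.
        return max(self.min_active_servers,
                   int(pool_size * avg_utilization / self.idle_threshold))


policy = PowerPolicy()
print(policy.target_active_servers(hour=2, avg_utilization=0.05, pool_size=10))  # -> 2
```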

We will explore these ideas and more in upcoming blog posts as we delve into improving our information-to-energy ratio: squeezing more information out of the energy necessary to produce it and, perhaps taken to the extreme, spending less energy on information that has less value.  Now that’s a tricky topic!

Stay tuned for future posts.



Why RFPs Suck for Both Sides by jigordon
July 14, 2009, 9:32 pm
Filed under: negotiation, Outsourcing, process

RFPs suck for both sides of the equation. Bidders hate responding to them and the requesting organization hates reviewing them.

Why?

Well, because they’re time consuming… and each side believes that the other side is: 1) Only spending enough time to barely glean the financials off the top; 2) Inserting default language from prior RFPs which may or may not have relevance to the current project; and 3) Only doing this to appease some misguided sense of a “strategic sourcing process”.  These assumptions are all 100% true:

  1. Using RFPs correctly can be a valuable part of a strategic sourcing process.  But generally speaking, they’re hastily assembled, from a template, and sent out without consideration as to who will get them.
  2. Responses almost always arrive at the last possible moment – not because they’re the product of countless hours of taxing effort and meticulous drafting – but because they’re tossed in a drawer and forgotten until the last possible moment.
  3. Reviews ARE hastily done… with receiving “teams” designated to score RFPs by section but having no real training as to how to do it properly (usually because they didn’t spend enough time working on a requirements document for the project to begin with).

How do I know this?  Well, it’s easy, really.  After a decade of using them, I long ago learned to monitor the properties of the RFP’s Word document itself.  Since I ask for responses to be submitted electronically in that same document, I can see EXACTLY how much time has been spent editing it.  Do I really think that it only takes an hour of editing to respond to one?  Ha.  Only if it’s a copy-paste job.
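
For anyone who wants to try the same check, here is a minimal sketch in Python using only the standard library.  A .docx file is an ordinary ZIP package, and Word records cumulative editing time (in minutes) in the docProps/app.xml part; the file name below is purely hypothetical.

```python
import zipfile
import xml.etree.ElementTree as ET

# Namespace of the extended (app) properties part inside a .docx package.
EXT_NS = "{http://schemas.openxmlformats.org/officeDocument/2006/extended-properties}"


def total_editing_minutes(docx_path):
    """Return the cumulative editing time (minutes) Word recorded for a .docx file.

    Returns None if the property is missing (e.g., the metadata was scrubbed).
    """
    with zipfile.ZipFile(docx_path) as pkg:
        with pkg.open("docProps/app.xml") as f:
            root = ET.parse(f).getroot()
    node = root.find(EXT_NS + "TotalTime")
    return int(node.text) if node is not None and node.text else None


if __name__ == "__main__":
    minutes = total_editing_minutes("rfp_response.docx")  # hypothetical file name
    print(f"Recorded editing time: {minutes} minutes")
```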

But I’d wondered what bidders were doing to monitor our review.  Now I know.  And I think it’s an excellent smack-down.  Reviewers SHOULD be held accountable for the efforts they ask others to expend on their behalf.  As time-consuming processes go, you should at least be willing to put in the effort to review something that you’ve asked someone else to create.  Oh, and by all means you should have LIMITED the number of potential respondents long before sending out the document package.

By the way, all of the food, drink and alcohol provided by these various agencies sure smacks of impropriety to me.  NEVER send a reviewing organization ANYTHING until after the deal has been signed… and then you’d better comply with that organization’s gift policy or you should expect to get it back.

 



Delivering Perfection by jigordon
July 7, 2009, 9:32 am
Filed under: Outsourcing, process, source code, Uncategorized

In thousands of meetings over the years, I’ve been privy to a very common conversation.  It’s a discussion of deliverables – what is needed, what is wanted, how much money is available to pay for the needs/wants, who can create the best solution, etcetera.  Regardless of the actual nature of the deliverable, the basics are always the same:  we want what we want, when we want it, at the least total cost.  The end result, however, varies widely based on a huge number of factors.  One of them is the quality of the “spec” – the document describing what’s being created; another is the quality of the group performing the work.

The deliverable is never perfect.  At some point in the process, either the vendor makes errors or the buyer doesn’t adequately describe what they want (or consider all of the various contingencies).  The net result is payment for something that doesn’t do what you hoped it would do – or going over budget for the fix.  So how do you deliver perfection?

Well, the folks who write the software that runs NASA’s Space Shuttle have it about right.  It’s a four-part process that has kept their code running virtually bug-free for the last 20 years, and like the Five Fundamental Skills for Effective Negotiation, it’s not (pardon the pun) rocket science:

  1. The product is only as good as the plan for the product.
  2. The best teamwork is a healthy rivalry.
  3. The database is the software base.
  4. Don’t just fix mistakes – fix whatever permitted the mistake in the first place.

This is a fascinating story about a group of 260 people working normal hours and achieving extraordinary results (the last 11 versions of the software have had a total of only 17 errors).  Equally important from a stats perspective are the specs.  For the current application (420,000 lines of code – for comparison’s sake, Windows XP has 40,000,000 and Mac OS X has 86,000,000), the spec is 40,000 pages long.

Now, granted, many of the projects your teams are working on aren’t operating systems.  But how many of you have seen a spec document that’s more than 100 pages?  How about 50?  Very few.  In fact, I am used to seeing spec documents of less than 5 pages – 10 at most.  It’s no wonder that there are errors.
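
To put those spec sizes in perspective, here is a quick back-of-the-envelope calculation using the article’s numbers; the “typical project” figures are assumptions for illustration only.

```python
# Figures quoted above for the Shuttle software.
shuttle_spec_pages = 40_000
shuttle_loc = 420_000

# Hypothetical "typical project": a 5-page spec for a mid-sized business application.
typical_spec_pages = 5
typical_loc = 50_000

# Spec density expressed as pages per 1,000 lines of code (KLOC).
shuttle_density = shuttle_spec_pages / (shuttle_loc / 1_000)
typical_density = typical_spec_pages / (typical_loc / 1_000)

print(f"Shuttle: ~{shuttle_density:.0f} spec pages per KLOC")   # ~95
print(f"Typical: ~{typical_density:.1f} spec pages per KLOC")   # ~0.1
```

Roughly a thousand-fold difference in specification detail, which goes a long way toward explaining the difference in error rates.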

I also don’t believe that many of us will be effective in getting our development teams to change, either.  But if they only got a little better, the cost savings would be immense.  So share the article with them from a human-interest perspective (i.e., don’t push an agenda).  The worst that happens is you start a dialog.



Is an Official Classification of Data Center Availability Capability Important? by davidjyoung

The Uptime Institute developed a tiered classification approach to data center site infrastructure functionality and high availability, addressing the need for a common benchmarking standard in an area that, until then, was usually based on opinion and conjecture.  This system has been in practice since 1995 and is often referred to by enterprises and co-location/managed hosting service providers to tout the robustness of their data centers.  The tiers range from Tier I (Basic Data Center), where there is simply a single path for power and cooling distribution without redundant components, providing 99.671% site availability, to Tier IV (Fault Tolerant), where the site infrastructure has the capacity and capability to permit any planned or unplanned activity without disruption to the critical load, providing 99.995% site availability.
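
To make those availability percentages more concrete, here is a simple sketch that converts the quoted figures into allowed downtime per year.

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours, ignoring leap years

# Availability figures quoted for the Uptime Institute tiers above.
tiers = {
    "Tier I (Basic Data Center)": 0.99671,
    "Tier IV (Fault Tolerant)": 0.99995,
}

for name, availability in tiers.items():
    downtime_hours = (1 - availability) * HOURS_PER_YEAR
    print(f"{name}: ~{downtime_hours:.1f} hours of downtime per year "
          f"(~{downtime_hours * 60:.0f} minutes)")

# Tier I allows roughly 28.8 hours of downtime per year; Tier IV roughly 26 minutes.
```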

The Uptime Institute has recently asserted two things in its leadership role in this area: first, there is no such thing as “almost Tier III” or “Tier II+”; you are either Tier III or you are not adhering to the strict definition.  Second, you must be “certified” by the Uptime Institute’s certification body, or by an Uptime Institute-trained and certified consultant, to refer to your data center as adhering to one of these classification levels.

I tend to think that the Uptime Institute’s tier classification has become a de facto standard, and it is a little late, and disingenuous, to assert control now over using these terms to describe a data center.  I think it is telling that the Uptime Institute reports that only “two dozen” data centers have had their tier rating certified.

This type of rating should be within the purview of an international standards body, not an organization, even a not-for-profit organization, that stands to benefit financially from certification.  Falling short of the strict definition of a particular tier level, such as the difference between Tier II and Tier III, does not necessarily mean the data center fails to meet strict requirements for redundancy in other important areas.  There is no weighting of failover and redundancy measures; it is all or nothing.

While I am not advocating ‘shades of gray’ when it comes to building a robust infrastructure, there are many factors that come into play when evaluating the availability of the infrastructure. All of this is for naught if the application architecture, the sole purpose of having a data center to begin with, is not built for a sufficient amount of resiliency and failover. No tier IV data center is going to save a poorly architected application.