Archive

Archive for the ‘Cloud’ Category

A Solution for Private Cloud Security “Blueprint, Design Guide & Operations Guide”

"A Solution for Private Cloud Security" series of three papers on private cloud security. With increasing numbers of organizations looking to create cloud-based environments or to implement cloud technologies within their existing data centers, business and technology decision-makers are looking closely at the possibilities and practicalities that these changes involve.

There are three documents in the downloadable .zip file:

  • A Solution for Private Cloud Security – Service Blueprint
  • A Solution for Private Cloud Security – Service Design
  • A Solution for Private Cloud Security – Service Operations

This document set represents the v.95 beta release.

Please write to Tom Shinder at tomsh@microsoft.com if you have recommendations or would like to review these documents for the official v1 release.

Download Page

Note: Tom is doing a great job building the Reference and Security Architecture for Private Cloud.

Categories: Cloud, IaaS, Microsoft, Security

Download Microsoft Private Cloud Evaluation Software

January 17, 2012
Source
http://technet.microsoft.com/en-us/evalcenter/hh505660.aspx
 
System Center 2012 Release Candidate plus optional Windows Server 2008 R2 SP1 download

Download Microsoft Private Cloud

A Microsoft private cloud dramatically changes the way your business produces and consumes IT services by creating a layer of abstraction over your pooled IT resources. This allows your datacenter to offer true infrastructure service capability as well as optimally managed application services.

Microsoft private cloud solutions are built on System Center and Windows Server.

System Center 2012 Release Candidate empowers you with a common management toolset for your private and public cloud applications and services. System Center helps you confidently deliver IT as a Service for your business.

Windows Server 2008 R2 SP1 (optional download) provides powerful, improved virtualization capabilities that can transform how you deliver IT services to your end users and enable you to lay the foundation of a private cloud infrastructure.

Please Note: Many Microsoft private cloud scenarios require Windows Server 2008 R2 SP1. If you are using an older version, we highly recommend upgrading to experience the full Microsoft private cloud evaluation.
Need more information? See the product details page. Register to access technical product resources at the Microsoft Private Cloud Evaluation Resource Page.

The Microsoft private cloud evaluation includes:

System Center 2012 Release Candidate
Available in these languages: English

  • System Center 2012 Unified Installer is a utility designed to perform new, clean installations of System Center 2012 for testing and evaluation purposes only. If you want to upgrade from an existing System Center installation or choose set up options such as high availability or multi-server component installs, please refer instead to the System Center 2012 component installation guides located on the Microsoft Private Cloud Evaluation Resource Page.
    User’s Guide >>
  • System Center 2012 App Controller provides a common self-service experience across private and public clouds that can help you empower application owners to easily build, configure, deploy, and manage new services.
    System Requirements >>
  • System Center 2012 Configuration Manager provides comprehensive configuration management for the Microsoft platform that can help you empower users with the devices and applications they need to be productive while maintaining corporate compliance and control.
    System Requirements >>
  • System Center 2012 Data Protection Manager provides unified data protection for Windows servers and clients that can help you deliver scalable, manageable, and cost-effective protection and restore scenarios from disk, tape, and off premises.
    System Requirements >>
  • System Center 2012 Endpoint Protection, built on System Center Configuration Manager, provides industry-leading threat detection of malware and exploits as part of a unified infrastructure for managing client security and compliance that can help you simplify and improve endpoint protection.
    System Requirements >>
  • System Center 2012 Operations Manager provides deep application diagnostics and infrastructure monitoring that can help you ensure the predictable performance and availability of vital applications and offers a comprehensive view of your datacenter, private, and public clouds.
    System Requirements >>
  • System Center 2012 Orchestrator provides orchestration, integration, and automation of IT processes through the creation of runbooks that can help you to define and standardize best practices and improve operational efficiency.
    System Requirements >>
  • System Center 2012 Service Manager provides flexible self-service experiences and standardized datacenter processes that can help you integrate people, workflows, and knowledge across enterprise infrastructure and applications.
    System Requirements >>
  • System Center 2012 Virtual Machine Manager provides virtual machine management and services deployment with support for multi-hypervisor environments that can help you deliver a flexible and cost effective private cloud environment.
    System Requirements >>

Windows Server 2008 R2 SP1 (optional download)
Available in these languages: Chinese (Simplified), English, French, German, Japanese, Spanish

  • Windows Server 2008 R2 SP1 is designed to help you increase control, availability, and flexibility of your datacenter and desktop infrastructure while helping reduce costs.
    System Requirements >>

Transforming IT with Microsoft Private Cloud

January 8, 2012

 

The definition, business value, and technology benefits of “the cloud” have been hotly debated in recent months. Most agree that cloud computing can accelerate innovation, reduce costs, and increase business agility in the market. In 2012, cloud computing will transition from hype and discussion to part of every enterprise’s reality, and IT is uniquely positioned to lead this transformation and help the business reap the benefits of cloud computing.

Join us for a virtual event designed to help you explore your cloud options. It’s your chance to interact with Microsoft experts and with IT leaders like yourself, who have been putting cloud technology to work in their own organizations. You’ll be among the first to hear the latest private cloud news from Microsoft.

Transforming IT with Microsoft Private Cloud

Private cloud discussion with Microsoft executives: Insights and news

  • Satya Nadella, President, Server and Tools Business, Microsoft
  • Brad Anderson, Corporate Vice President, Management and Security Division, Microsoft

Register Now for the virtual event
Tuesday, January 17th
8:30 AM PST | 16:30 UTC
  • Hear from other senior IT professionals about how cloud computing can help you gain maximum competitive advantage with minimal risk.
  • Learn about Microsoft cloud offerings, including private, public, and hybrid cloud models.
  • Experience Microsoft private cloud solutions through the Microsoft Technology Center.

Check the event link:

http://www.microsoft.com/business/events/en-us/PrivateCloudExec/#fbid=ZALPym445Ne

Categories: Cloud, IaaS, Microsoft

Open Beta now available for download: Service Management for the Private Cloud

October 3, 2011

The Solution Accelerators Microsoft Operations Framework team is working on a new white paper, Service Management for the Private Cloud. Based on your participation in the IPD Beta Program, we believe you will find this white paper useful in your journey to the cloud. We hope you’ll take the time to preview and provide feedback on our new beta release.

Get the download

To download the Beta version of this Solution Accelerator, click here.

Tell us what you think

The Beta review period runs through October 7, 2011.

Download the beta guide and provide us with your feedback, especially in the areas of its usefulness, usability, and impact. Send an email with your input to MOF@microsoft.com by Friday, October 7. We need to receive your input during this period for your feedback to be included in the final release of Service Management for the Private Cloud.

Your input helps to make the guides we publish as helpful and useful as possible. We look forward to hearing from you!

To submit feedback through email, please use the following procedure:

1. Use the Comment feature in Microsoft Word to insert your feedback in the form of comments. Please only submit your feedback in the documents using this feature in Word.

2. If your feedback concerns something other than the content of the document, please include your feedback directly in your email message.

3. Email the document with your feedback to MOF@microsoft.com.

Kinds of feedback

We would especially appreciate feedback in the following areas:

  • Usefulness
    Is the technical depth of this white paper sufficient for the topics covered? What portions of the white paper are the most useful to your organization?
  • Usability
    Is the structure or flow of this white paper effective? Is the information presented in a clear and logical manner? Can you easily find key content?
  • Impact
    Do you anticipate that this white paper will save you time and accelerate deployment of Microsoft management products in your organization? Has this white paper had a positive influence on your opinion of the Microsoft technologies it addresses? Do you think this white paper helped you apply service management principles to make your private cloud more successful?

Benefits for participation

  • You get an early look at this high-demand content.
  • You will be listed as a contributor if you provide feedback we integrate into the final version of the guide.

Availability

The final release is expected to be available from the Microsoft Download Center in November 2011. You will receive an email notification when it’s available for download.

Learn More

Visit the MOF home page on TechNet: www.microsoft.com/MOF

For the latest Solution Accelerators, visit the home page: www.microsoft.com/SolutionAccelerators

Thank you for your interest in the development of the Service Management for the Private Cloud white paper. We look forward to receiving your feedback!

Sincerely,

Solution Accelerators Infrastructure Planning and Design Team

Microsoft Corporation

Follow Solution Accelerators on Twitter to get the latest tips and updates: @MSSolutionAccel

Categories: Cloud, Hyper-V R2, IaaS

System Center Virtual Machine Manager 2012 Release Candidate is here

September 12, 2011

So it is here… get ready for the new wave.

http://www.microsoft.com/en-us/server-cloud/system-center/virtual-machine-manager-2012.aspx

System Center 2012 cloud and datacenter management solutions empower you with a common management tool set for your private and public cloud applications and services. System Center helps you confidently deliver IT as a Service for your business. Virtual Machine Manager 2012 Release Candidate (RC) is key to delivering the promise of the System Center 2012 release wave. Virtual Machine Manager 2012 enables you to:

  • Deliver flexible and cost-effective Infrastructure as a Service (IaaS). You can pool and dynamically allocate virtualized datacenter resources (compute, network, and storage), enabling a self-service infrastructure experience for your business, with flexible role-based delegation and access control.
  • Apply cloud principles to provisioning and servicing your datacenter applications with techniques like service modeling, service configuration, and image-based management. You can also state-separate your applications and services from the underlying infrastructure using server application virtualization. This results in a “service-centric” approach to management, where you manage the application or service lifecycle and not just datacenter infrastructure or virtual machines.
  • Optimize your existing investments by managing multi-hypervisor environments, including Hyper-V, Xen, and VMware.
  • Dynamically optimize your datacenter resources based on workload demands, while ensuring reliable service delivery with features like high availability.
  • Deliver best-of-breed virtualization management for Microsoft workloads like Exchange and SharePoint.

Benefits

Virtual Machine Manager 2012 Release Candidate is a core component of the System Center 2012 release wave and offers you the following benefits:

  • Build your private cloud today; provision flexible, agile, and cost-effective IaaS while maintaining your quality of service (QoS) commitments to the business.
  • Manage heterogeneous virtual environments using a single tool, thus optimizing your existing datacenter investments.
  • Optimize your existing applications for private cloud deployment without requiring you to rewrite them from scratch.
  • Dramatically simplify application provisioning and servicing, saving operational effort and expense.
  • Unlock application mobility between your cloud environments as appropriate to your business needs.

So you can start testing now, and I would recommend taking a look at the VMM 2012 Survival Guide.

Microsoft TechNet Library content:

System Center Virtual Machine Manager 2012  (on the Web)

Download the official VMM 2012 Beta content  (may not be as current as the “live” Web version in TechNet)

Videos

System Center VMM 2012 Overview 

Microsoft Virtualization for VMware Professionals  This is a link to the whole series. For sections specifically related to VMM 2012, see:

Virtualization Jump Start (07): System Center Virtual Machine Manager 2012 

Virtualization Jump Start (08): Private Cloud Solutions, Architecture & VMM Self-Service Portal 2.0 

90 Seconds to the Cloud: Virtual Machine Manager 2012 Beta with Kenon Owens 

MMS 2011 Day 1 Keynote 

Creating a Stand-Alone Virtual Machine with a Blank VHD in VMM 2012 

Best of MMS Belgium 2011 video series  (Includes “Virtual Machine Manager 2012: Technical Overview”, “Managing Your Fabric With System Center Virtual Machine Manager 2012”)

Blogs

System Center Virtual Machine Manager blog  (from the Microsoft product team)

Microsoft Server Application Virtualization Blog (from the Microsoft product team) 

SCVMM 2012 content on Hyper-V.nu blog 

SCVMM 2012 content on Virtualization and some coffee blog 

Other

System Center Virtual Machine Manager 2012: VMM Gets Major Upgrade   (TechNet Magazine)

Troubleshooting

Post your question to the VMM forums

Troubleshooting OS Deployment of Hyper-V Through VMM 2012    (Blog post)

 

 

System Center Virtual Machine Manager Self-Service Portal 2.0 SP1 – Now Available for Download!

Using System Center Virtual Machine Manager Self-Service Portal 2.0 SP1, you can respond more effectively—and at a lower cost—to the rapidly changing needs of your organization. Built on Windows Server® 2008 R2, Hyper-V™ technology, and System Center Virtual Machine Manager, VMMSSP enables you to offer infrastructure as a service in your organization.

Download VMMSSP

System Center Virtual Machine Manager Self-Service Portal 2.0 SP1 is a partner-extensible toolkit that enables customers to dynamically pool, allocate, and manage their datacenter resources, to offer infrastructure as a service for their organization. The self-service portal is a free Solution Accelerator, and is fully supported by Microsoft. In addition to providing users the ability to import virtual machines created outside the self-service portal, this new version is now localized in Japanese, simplified Chinese, and traditional Chinese.

Please send any questions or comments to the VMMSSP team at sspfeedback@microsoft.com. We will respond within two business days.

Please download VMMSSP here, and visit www.microsoft.com/SSP for more information, including a free datasheet and FAQ. You can also follow System Center and Solution Accelerators on Twitter to keep up to date on new releases and features.

VM Create from VMM SSP v2.0 SP1 (beta) fails with boot from ISO

A very interesting problem that I have seen before in my testing:

http://social.technet.microsoft.com/Forums/en-US/scvmmssp2/thread/4331b0fb-512a-4998-86ad-52dcb0ac711e/

 

If I create the VM from VMM 2008 R2, it "fails" on "Install VM Components" because there is no OS yet. But I can then start the VM from VMM, and it boots from the ISO just fine.

From SSP it fails with:

Parameters cannot be specified for the guest operating system because the selected template was created as a non-customizable template (-NoCustomization parameter). (Error ID: 730) Please retry the operation without specifying guest operating system parameters.

I have tried with and without a synthetic network adapter, and have also tried a custom action without the -Owner $SubmittedBy -ComputerName $vmName -FullName $submittedBy parameters in the New-VM entries of the createvm task.

 

Santos from Microsoft gave a one-shot solution:

The Action XML segment that does not have "-Owner $SubmittedBy -ComputerName $vmName -FullName $submittedBy" needs to be associated with the service role in which the virtual machines are being created; then select that service role in the Create Virtual Machine wizard. This should resolve your issue. You can also refer to "Associating an Action XML Segment with a Service Role" in the VMM08R2_VMMSSPExtensibilityGuide.
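To make the workaround concrete, here is a minimal sketch of what the trimmed-down New-VM entry in the createvm task might look like once the guest OS customization parameters are removed. This assumes the VMM 2008 R2 PowerShell snap-in; the template and host names are hypothetical, the task XML wrapper is omitted, and $vmName follows the thread above. Treat this as an illustration, not the toolkit's shipped action:

    # Hypothetical sketch: create the VM from a non-customizable (-NoCustomization)
    # template WITHOUT passing any guest OS parameters (-Owner, -ComputerName,
    # -FullName), which is what triggers error 730.
    # $vmName is supplied by the SSP createvm task; names below are made up.
    $template = Get-Template | Where-Object { $_.Name -eq "BlankVhdBootFromIso" }
    $vmHost   = Get-VMHost  | Where-Object { $_.Name -eq "hyperv01.contoso.com" }
    New-VM -Template $template -Name $vmName -VMHost $vmHost -Path "C:\ProgramData\VMs"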

Categories: Cloud, IaaS, Virtualization, VMM

Cloud Computing… Another Perspective

This is a cool video about cloud computing… enjoy it!

 

Categories: Cloud

System Center Virtual Machine Manager Self-Service Portal 2.0 SP1 Beta

It is now on Connect: https://connect.microsoft.com/site1044/program5055

 

You can grab yours now.

What’s new with VMMSSP 2.0 SP1?

 

  • Import virtual machines: Allows DCIT (datacenter IT) administrators to re-import virtual machines that were removed from the self-service portal, and also to import virtual machines created outside the portal but managed by VMM.
  • Expire virtual machines: Provides the ability to set an expiration date for virtual machines being created or imported, so that they are automatically deleted after the set date. This feature also provides users the flexibility (through role-based access) to set or change the expiration date for a virtual machine.
  • Notify administrators: Notifies BUIT (business unit IT) or DCIT administrators about various events in the system (for example, Submit request, Approve request, Expire virtual machine, and so on) via email through SQL Server mail integration.
  • Move infrastructure between business units: Allows DCIT administrators to move an infrastructure from one business unit to another when the system is in Maintenance Mode.

The VMMSSP 2.0 SP1 Beta review program is now open.

Tell us what you think! Download the beta and provide us with your feedback, especially in the areas of its usefulness, usability, and impact. Send an email with your input to sspfeedback@microsoft.com.

Your input helps to make the guides we publish as helpful and useful as possible. We look forward to hearing from you!

 

Kinds of Feedback:

We would especially appreciate feedback in the following areas:

  • Usefulness: What portions of the tool are the most useful to your organization?
  • Usability: Is the solution easy to set up and navigate through?
  • Impact: Do you anticipate that this tool will save you time in your organization? Has this tool had a positive influence on your opinion of the Microsoft technologies it addresses?

Private Cloud Concepts

January 1, 2011

Adam Fazio wrote a very interesting blog post about Private Cloud Concepts. I think it is an important one to share; here is his post.

 

This post is part of a series which outlines the Context, Principles, and Concepts that form the basis of a holistic Private Cloud approach. Each post gives a framing for the next, so they are best read in order.

The following concepts are abstractions or strategies that support the principles and facilitate the composition of a (Microsoft) Private Cloud. They are guided by and directly support one or more of the principles.

Holistic Approach to Availability

In order to achieve the perception of continuous availability, a holistic approach must be taken in the way availability is achieved. Traditionally, availability has been the primary measure of the success of IT service delivery and is defined through service level targets that measure the percentage of uptime (e.g., 99.99 percent availability). However, defining service delivery success solely through availability targets creates the false perception of “the more nines the better” and does not account for how much availability the consumers actually need.

There are two fundamental assumptions behind using availability as the measure of success: first, that any service outage will be significant enough in length that the consumer will be aware of it; and second, that there will be a significant negative impact to the business every time there is an outage. It is also a reasonable assumption that the longer it takes to restore the service, the greater the impact on the business.

There are two main factors that affect availability. The first is reliability, which is measured by Mean-Time-Between-Failures (MTBF); this measures the time between service outages. The second is resiliency, which is measured by Mean-Time-to-Restore-Service (MTRS); MTRS measures the total elapsed time from the start of a service outage to the time the service is restored. The fact that human intervention is normally required to detect and respond to incidents limits how much MTRS can be reduced. Therefore, organizations have traditionally focused on MTBF to achieve availability targets. Achieving higher availability through greater reliability requires increased investment in redundant hardware and an exponential increase in the cost of implementing and maintaining this hardware.

In a traditional data center, the MTRS may average well over an hour, while a dynamic data center can recover from failures in a matter of seconds. Combined with automated detection of and response to failure and warning states within the infrastructure, this can reduce the MTRS (from the perspective of IaaS) dramatically. Thus a significant increase in resiliency makes the reliability factor much less important. Availability (minutes of uptime/year) is no longer the primary measure of the success of IT service delivery. The perception of availability and the business impact of unavailability become the measures of success.
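To make the arithmetic behind this claim concrete, here is a minimal sketch. The formula is the standard Availability = MTBF / (MTBF + MTRS); the MTBF and MTRS figures are illustrative assumptions, not measurements:

    # Availability as a function of reliability (MTBF) and resiliency (MTRS).
    # The figures below are illustrative assumptions only.
    function Get-Availability ($MtbfHours, $MtrsHours) {
        [math]::Round(100 * $MtbfHours / ($MtbfHours + $MtrsHours), 4)
    }

    # Traditional data center: a failure every ~30 days, ~1 hour to restore
    Get-Availability -MtbfHours 720 -MtrsHours 1            # => 99.8613

    # Dynamic data center: same reliability, but restored in ~30 seconds
    Get-Availability -MtbfHours 720 -MtrsHours (30 / 3600)  # => 99.9988

Holding MTBF constant and shrinking MTRS from an hour to seconds adds two “nines”, which is why resiliency, rather than ever more redundant hardware, carries the holistic approach.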

Using the holistic approach, higher levels of availability and resiliency are achieved by replacing the traditional model of physical redundancy with software tools. 

[Figure: MTRS and MTBF in a Traditional Data Center versus a Dynamic Data Center]

Homogenization of Physical Hardware

Homogenization of the physical hardware is a key concept for driving predictability. The underlying infrastructure must provide a consistent experience to the hosted workloads in order to achieve predictability. This consistency is attained through the homogenization of the underlying servers, network, and storage.

Abstraction of services from the hardware layer through virtualization makes “server SKU differentiation” a logical rather than a physical construct. This eliminates the need for differentiation at the physical server level. Greater homogenization of compute components results in a greater reduction in variability. This reduction in variability increases the predictability of the infrastructure which, in turn, improves service quality.

The goal is to ultimately homogenize the compute, storage, and network layers to the point where there is no differentiation between servers. In other words, every server has the same processor and memory; every server connects to the same storage resources and to the same networks. This means that any virtualized service runs and functions identically on any physical server and so it can be relocated from a failing or failed physical server to another physical server seamlessly without any change in service behavior.

It is understood that full homogenization of the physical infrastructure may not be feasible. While it is recommended that homogenization be the strategy, where this is not possible, the compute components should at least be standardized to the fullest extent possible.

Resource Pooling

Leveraging a shared pool of compute resources is key. This Resource Pool is a collection of shared resources composed of compute, storage, and network that create the fabric that hosts virtualized workloads. Subsets of these resources are allocated to the customers as needed and conversely, returned to the pool when they are not needed. Ideally, the Resource Pool should be as homogenized and standardized as possible.
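As a rough illustration of the concept (the pool sizes and function names below are hypothetical, not part of any Microsoft toolkit), a resource pool behaves like a simple allocate-and-return structure over homogenized capacity:

    # Hypothetical sketch of the Resource Pool concept: capacity is drawn from
    # a shared, homogenized pool and returned when no longer needed.
    $pool = @{ CpuCores = 512; MemoryGB = 4096 }

    function Request-Capacity ($Pool, $CpuCores, $MemoryGB) {
        if ($Pool.CpuCores -lt $CpuCores -or $Pool.MemoryGB -lt $MemoryGB) {
            throw "Insufficient reserve capacity in the pool."
        }
        $Pool.CpuCores -= $CpuCores; $Pool.MemoryGB -= $MemoryGB
    }

    function Return-Capacity ($Pool, $CpuCores, $MemoryGB) {
        $Pool.CpuCores += $CpuCores; $Pool.MemoryGB += $MemoryGB
    }

    Request-Capacity $pool -CpuCores 8 -MemoryGB 64   # tenant workload arrives
    Return-Capacity  $pool -CpuCores 8 -MemoryGB 64   # workload retired; capacity flows back

The point is the return path: capacity flows back into the shared pool instead of remaining dedicated to a single project.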

[Figure: Resource Pool]

Virtualized Infrastructure

Virtualization is the abstraction of hardware components into logical entities. Although virtualization occurs differently in each infrastructure component (server, network, and storage), the benefits are generally the same, including little or no downtime during resource management tasks, enhanced portability, simplified management of resources, and the ability to share resources. Virtualization is the catalyst for the other concepts, such as Elastic Infrastructure, Partitioning of Shared Resources, and Resource Pooling. The virtualization of infrastructure components needs to be seamlessly integrated to provide a fluid infrastructure that is capable of growing and shrinking on demand, and that provides global or partitioned resource pools of each component.

Fabric Management

Fabric is the term applied to the collection of Resource Pools. Fabric Management is a level of abstraction above virtualization; in the same way that virtualization abstracts physical hardware, Fabric Management abstracts services from specific hypervisors and network switches. Fabric Management can be thought of as an orchestration engine, which is responsible for managing the life cycle of a consumer’s workload (one or more VMs which collectively deliver a service). Fabric Management responds to service requests (e.g. to provision a new VM or set of VMs), Systems Management events (e.g. moving/restarting VMs as a result of a warning or failure), and Service Management policies (e.g. adding another VM to a consumer workload in response to load).

Traditionally, servers, network and storage have been managed separately, often on a project-by-project basis. To ensure resiliency we must be able to automatically detect if a hardware component is operating at a diminished capacity or has failed. This requires an understanding of all of the hardware components that work together to deliver a service, and the interrelationships between these components. Fabric Management provides this understanding of interrelationships to determine which services are impacted by a component failure. This enables the Fabric Management system to determine if an automated response action is needed to prevent an outage, or to quickly restore a failed service onto another host within the fabric.

From a provider’s point of view, the Fabric Management system is key in determining the amount of Reserve Capacity available and the health of existing fabric resources. This also ensures that services are meeting the defined service levels required by the consumer.
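A minimal sketch of the orchestration idea follows; the event names and responses are invented for illustration (a real fabric manager such as System Center exposes far richer models):

    # Hypothetical sketch: Fabric Management as an orchestration loop that maps
    # service requests, systems-management events, and service-management
    # policies onto actions against the fabric.
    function Invoke-FabricAction ($FabricEvent) {
        switch ($FabricEvent.Type) {
            'ServiceRequest' { "Provision $($FabricEvent.VmCount) VM(s) for '$($FabricEvent.Workload)'" }
            'HostDegraded'   { "Move VMs off $($FabricEvent.HostName) before it fails" }
            'LoadThreshold'  { "Add one VM to '$($FabricEvent.Workload)' in response to load" }
            default          { "Log unhandled event type '$($FabricEvent.Type)'" }
        }
    }

    Invoke-FabricAction @{ Type = 'HostDegraded'; HostName = 'HV-07' }   # => "Move VMs off HV-07 ..."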

[Figure: Fabric]

Elastic Infrastructure

The concept of an elastic infrastructure enables the perception of infinite capacity. An elastic infrastructure allows resources to be allocated on demand and, more importantly, returned to the Resource Pool when no longer needed. The ability to scale down when capacity is no longer needed is often overlooked or undervalued, resulting in server sprawl and a lack of optimization of resource usage. It is important to use consumption-based pricing to incent consumers to be responsible in their resource usage. Automated or customer-request-based triggers determine when compute resources are allocated or reclaimed.

Achieving an elastic infrastructure requires close alignment between IT and the business, as peak usage and growth rate patterns need to be well understood and planned for as part of Capacity Management.
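For illustration only (the thresholds are assumed, and in practice would come out of Capacity Management), an elasticity trigger reduces to a comparison against target utilization in both directions, so that scale-down is as automatic as scale-up:

    # Hypothetical elasticity trigger: allocate when hot, reclaim when idle.
    function Get-ScaleDecision ($UtilizationPercent, $ScaleUpAt = 80, $ScaleDownAt = 30) {
        if     ($UtilizationPercent -ge $ScaleUpAt)   { 'AllocateFromPool' }
        elseif ($UtilizationPercent -le $ScaleDownAt) { 'ReturnToPool' }
        else                                          { 'NoChange' }
    }

    Get-ScaleDecision 85   # => AllocateFromPool
    Get-ScaleDecision 12   # => ReturnToPool (the scale-down step that is often overlooked)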

Partitioning of Shared Resources

Sharing resources to optimize usage is a key principle; however, it is also important to understand when these shared resources need to be partitioned. While a fully shared infrastructure may provide the greatest optimization of cost and agility, there may be regulatory requirements, business drivers, or issues of multi-tenancy that require various levels of resource partitioning. Partitioning strategies can occur at many layers, such as physical isolation or network partitioning. Much like redundancy, the lower in the stack this isolation occurs, the more expensive it is. Additional hardware and Reserve Capacity may be needed for partitioning strategies such as the separation of resource pools. Ultimately, the business will need to balance the risks and costs associated with partitioning strategies, and the infrastructure will need the capability of providing a secure method of isolating the infrastructure and network traffic while still benefiting from the optimization of shared resources.

Resource Decay

Treating infrastructure resources as a single Resource Pool allows the infrastructure to experience small hardware failures without significant impact on the overall capacity. Traditionally, hardware is serviced using an incident model, where the hardware is fixed or replaced as soon as there is a failure. By leveraging the concept of a Resource Pool, hardware can be serviced using a maintenance model. A percentage of the Resource Pool can fail because of “decay” before services are impacted and an incident occurs. Failed resources are replaced on a regular maintenance schedule or when the Resource Pool reaches a certain threshold of decay instead of a server-by-server replacement.

The Decay Model requires the provider to determine the amount of “decay” they are willing to accept before infrastructure components are replaced. This allows for a more predictable maintenance cycle and reduces the costs associated with urgent component replacement.
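A rough sketch of the Decay Model follows; the pool size and threshold are illustrative assumptions that each provider would set for itself:

    # Hypothetical Decay Model: replace hardware when aggregate decay crosses a
    # provider-chosen threshold, rather than on every individual failure.
    $totalNodes     = 200
    $failedNodes    = 7
    $decayThreshold = 5     # percent of the pool the provider is willing to lose

    $decayPercent = 100 * $failedNodes / $totalNodes
    if ($decayPercent -ge $decayThreshold) {
        "Decay at $decayPercent% - schedule a maintenance window to replace failed nodes."
    } else {
        "Decay at $decayPercent% - absorb the failures until the regular maintenance cycle."
    }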

Service Classification

Service classification is an important concept for driving predictability and incenting consumer behavior. Each service class will be defined in the provider’s service catalog, describing service levels for availability, resiliency, reliability, performance, and cost. Each service must meet pre-defined requirements for its class. These eligibility requirements reflect the differences in cost when resiliency is handled by the application versus when resiliency is provided by the infrastructure.

The classification allows consumers to select the service they consume at the price and quality point appropriate for their requirements. The classification also allows the provider to adopt a standardized approach to delivering a service, which reduces complexity and improves predictability, thereby resulting in a higher level of service delivery.
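As a purely illustrative sketch (the class names, availability targets, and prices are invented), a service catalog entry pairs each class with its service levels and cost, letting consumers pick that price/quality point:

    # Hypothetical service catalog: each class trades cost against resiliency.
    $serviceCatalog = @(
        @{ Class = 'Gold';   AvailabilityTarget = 99.99; InfrastructureResiliency = $true;  CostPerVmMonth = 400 }
        @{ Class = 'Silver'; AvailabilityTarget = 99.9;  InfrastructureResiliency = $true;  CostPerVmMonth = 250 }
        @{ Class = 'Bronze'; AvailabilityTarget = 99.0;  InfrastructureResiliency = $false; CostPerVmMonth = 100 }  # app must supply its own resiliency
    )

    # A consumer shopping for at least "three nines":
    $serviceCatalog | Where-Object { $_.AvailabilityTarget -ge 99.9 } |
        ForEach-Object { "$($_.Class): `$$($_.CostPerVmMonth) per VM per month" }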

Cost Transparency

Cost transparency is a fundamental concept for taking a service provider’s approach to delivering infrastructure. In a traditional data center, it may not be possible to determine what percentage of a shared resource, such as infrastructure, is consumed by a particular service. This makes benchmarking services against the market an impossible task. By defining the cost of infrastructure through service classification and consumption modeling, a more accurate picture of the true cost of utilizing shared resources can be gained. This allows the business to make fair comparisons of internal services to market offerings and enables informed investment decisions.

Cost transparency also incents service owners to think about service retirement. In a traditional data center, services may fall out of use but often there is no consideration on how to retire an unused service. The cost of ongoing support and maintenance for an under-utilized service may be hidden in the cost model of the data center. Monthly consumption costs for each service can be provided to the business, incenting service owners to retire unused services and reduce their cost.

Consumption Based Pricing

This is the concept of paying for what you use, as opposed to a fixed cost irrespective of the amount consumed. In a traditional pricing model, the consumer’s cost is based on flat costs derived from the capital cost of hardware and software and the expenses to operate the service. In this model, services may be over- or underpriced relative to actual usage. In a consumption-based pricing model, the consumer’s cost reflects their usage more accurately.

The unit of consumption is defined in the service class and should reflect, as accurately as possible, the true cost of consuming infrastructure services, the amount of Reserve Capacity needed to ensure continuous availability, and the user behaviors that are being incented.
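A worked toy example follows; every rate and figure is invented purely to contrast the two models:

    # Hypothetical comparison of flat-rate vs. consumption-based pricing for
    # one consumer over a month. All rates and figures are invented.
    $flatRatePerMonth = 300           # fixed charge regardless of usage
    $ratePerVmHour    = 0.12          # unit price defined in the service class
    $vmHoursConsumed  = 10 * 8 * 22   # 10 VMs, 8 hours/day, 22 business days

    $consumptionCharge = $ratePerVmHour * $vmHoursConsumed
    "Flat rate:   `$$flatRatePerMonth per month"
    "Consumption: `$$consumptionCharge per month"   # => $211.2; idle nights and weekends are not billed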

(Thanks to authors Kevin Sangwell, Laudon Williams & Monte Whitbeck (Microsoft) for allowing me to revise and share)