Infrastructure, whether in the cloud or on-premises, sits at the heart of your technology stack. Its fundamental structure and configuration determine which technologies you can integrate, which transformative solutions you can access and how you can innovate the way your organisation works. If you want to dive into AI, implementing the right cloud infrastructure is essential.

When organisations look to implement cutting edge technologies like AI, they often run into the roadblock of their own infrastructure. Here, we’ll explore the best approach to your cloud infrastructure and how it can lay the foundation for transformative innovation.

Where does cloud infrastructure fit in wider digital transformation initiatives?

Digital transformation today is a key pillar of all technical roadmaps and is a continuous process of change and optimisation of your operations to support your business strategy. Whether organisations are aiming to increase operational efficiency, improve customer experience, adopt AI, or unlock new revenue opportunities, the cloud often serves as the backbone for change.

While sometimes overlooked, cloud infrastructure (like Microsoft Azure) plays a central role in enabling these efforts. Microsoft’s State of AI Infrastructure report found that infrastructure challenges are a common roadblock to implementing AI tools: 56% of organisations don’t have the proper infrastructure to support their desired AI workloads, and 41% cite infrastructure design and implementation as the area where they need the most support.


Microsoft Azure provides a comprehensive platform that supports both infrastructure modernisation and innovation. For many organisations, Azure offers the scalability, security and integration capabilities needed to modernise legacy systems, transition to more agile operating models and deliver services at scale.

How are organisations approaching the cloud?

We’re seeing a noticeable shift in how businesses approach cloud adoption. Early approaches often focused on lifting and shifting workloads to reduce costs or improve reliability. Today, the conversation is more strategic: Azure Landing Zones are configured around organisational priorities, financial operations (FinOps), governance, security and AI, while ensuring the cloud provides value for money in an increasingly cost-conscious economy.

Hybrid and multi-cloud models are common, as enterprises seek flexibility and want to avoid vendor lock-in. Some begin with targeted workloads such as development environments, Disaster Recovery as a Service (DRaaS) or Desktop as a Service (DaaS), before expanding into data platforms, enterprise applications or AI workloads.

What stands out in successful approaches is clarity: clear goals, clear governance, strong security frameworks and efficient operations.

Common pitfalls in cloud infrastructure for AI

While the benefits of cloud are well understood, there are also recurring challenges that can undermine success in cloud infrastructure for AI:


Insufficient planning

Moving to the cloud without a well-defined strategy often results in higher costs, technical debt, data loss and unnecessary disruption.

Lift-and-shift without modernisation

Replicating legacy architectures in the cloud can limit performance, increase costs, conceal existing security gaps and close off future modernisation opportunities.

Lack of governance

Without controls around cost management, identity, regulation, security, and application services, cloud environments can quickly become “runaway trains” – becoming difficult to manage, resulting in disrupted business operations and higher running costs.

Skills gaps

Cloud success requires new capabilities, both technical and operational, that many teams are still developing.

These pitfalls are avoidable with the right planning, expertise, and support.

Key considerations for businesses moving to the cloud

A successful cloud journey requires more than technology decisions. It involves aligning people, process, and priorities.

Here are a few guiding principles:

Start with a clear strategy

Define objectives in terms of business requirements, security goals and financial constraints; clear objectives make it much easier to identify the right technology solutions.

Modernise, not just migrate

Take the opportunity to rethink how applications and services are designed and delivered, which may offer greater scalability or business continuity.

Build internal capability

Invest in training, FinOps and change management to bring teams along the journey.

Prioritise security and compliance

Establish frameworks early to ensure data governance and regulatory alignment.

A structured, phased approach typically yields better outcomes than migrations without clear direction.

Microsoft Azure’s role in supporting AI adoption

As interest in AI continues to grow, Azure provides a practical, secure path to integration. Through services like Azure OpenAI, Azure Machine Learning and Cognitive Services, organisations can begin incorporating AI into their operations, whether to enhance customer-facing services, perform granular data analysis or automate internal workflows, all at scale.
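As a hedged sketch of what getting started can look like, the Azure CLI commands below create an Azure OpenAI resource and a model deployment. The resource names, region and model are placeholder assumptions, and model availability varies by region and subscription approval, so treat this as an illustration rather than a recipe:

```shell
# Create a resource group and an Azure OpenAI account (names/region are placeholders)
az group create --name rg-ai-demo --location uksouth

az cognitiveservices account create \
  --name my-openai-demo \
  --resource-group rg-ai-demo \
  --kind OpenAI \
  --sku S0 \
  --location uksouth

# Deploy a model into the account (model name/version depend on regional availability)
az cognitiveservices account deployment create \
  --name my-openai-demo \
  --resource-group rg-ai-demo \
  --deployment-name chat \
  --model-name gpt-35-turbo \
  --model-version "0613" \
  --model-format OpenAI \
  --sku-name Standard \
  --sku-capacity 1
```

From there, applications call the deployment endpoint like any other Azure AI service, with access controlled through Azure RBAC and keys.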

Importantly, Azure supports responsible AI development, with governance tools and model transparency built in. For businesses exploring AI, the platform offers both the infrastructure and the tools to pilot, scale, and manage AI initiatives within the boundaries of corporate and regulatory requirements.

Building a Future-Ready Financial Services Cloud: Novuna’s Azure Adoption – check it out here.

Taking the next steps and getting started

For organisations looking to move forward with Microsoft Azure and the broader cloud journey, a few initial steps can make a significant difference:

  • Conduct a cloud readiness or application assessment
  • Engage with a trusted partner to accelerate planning and execution
  • Leverage Microsoft programs and frameworks to support the transition
  • Start with a small, well-scoped workload, prove value, then scale cloud adoption across the organisation.

The secret to managing your Hybrid Cloud? Microsoft Azure Arc.

It’s no secret that most businesses have adopted a hybrid cloud strategy to meet their diverse infrastructure and data needs. Hybrid cloud, which combines on-premises data centres with public cloud services, provides businesses with greater flexibility, scalability, and cost-efficiency. However, managing this complex environment can be challenging. What’s the secret to managing your Hybrid Cloud environment? Microsoft’s Azure Arc.

In this blog, we outline how Azure Arc, Microsoft’s solution for simplifying hybrid cloud management, comes into play.

The Challenge of Managing Hybrid Cloud

While hybrid cloud offers numerous advantages, it also introduces complexities that businesses must navigate carefully. Here are some of the challenges that we most commonly see from customers.


Data Security and Compliance

With data spread across both on-premises and public cloud platforms, ensuring compliance with data protection regulations (such as the GDPR or the Common Law Duty of Confidentiality (CLDC)) becomes a major concern. Different regions may have varying rules regarding where and how data is stored and accessed.

Resource Visibility

As businesses scale their hybrid cloud environments, managing resources across multiple platforms can lead to a fragmented view of the entire infrastructure. This lack of visibility makes it difficult to monitor performance, manage costs, and troubleshoot issues effectively.

Consistency Across Environments

Managing workloads across different environments requires consistency in policies, configurations, and governance. Without a unified approach, companies risk running into compatibility issues or fragmented operations that affect performance.

Operational Overhead

As organisations adopt more hybrid and multi-cloud architectures, managing multiple platforms and tools can increase operational complexity. It often leads to inefficiencies and increased manual effort.

Security Management

Hybrid clouds introduce challenges around securing both on-premises and cloud resources. Protecting data, controlling access, and applying security policies consistently across environments require robust management tools.

While it can be challenging to manage a hybrid environment, many businesses need to keep their on-premises and cloud environments separate, so finding ways to support the management of both is a top priority for them.

Let’s dive into the secret to managing your hybrid cloud environment – Azure Arc.

What is Azure Arc?

Azure Arc is a set of technologies from Microsoft Azure that extends Azure’s management capabilities beyond Azure’s native cloud environment to on-premises, multi-cloud, and edge environments. It allows you to manage resources across a variety of environments, including physical servers, Kubernetes clusters, and databases, all from a single Azure interface. Essentially, Azure Arc makes managing hybrid and multi-cloud environments more streamlined and less fragmented.

By unifying the management of resources across different environments, Azure Arc provides a consistent management layer that integrates both cloud and on-premises infrastructure, ensuring security, compliance and operational efficiency.
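As a minimal sketch of how onboarding works in practice (assuming the Connected Machine agent is installed, and with placeholder names and IDs), a server is connected with `azcmagent` and then managed like any other Azure resource:

```shell
# On the on-premises server: connect it to Azure Arc (IDs/names are placeholders)
azcmagent connect \
  --resource-group "rg-hybrid" \
  --tenant-id "<tenant-id>" \
  --subscription-id "<subscription-id>" \
  --location "uksouth"

# From anywhere with the Azure CLI: the machine now appears as an Azure resource
az connectedmachine show --name "my-onprem-server" --resource-group "rg-hybrid"
```

Once connected, the machine receives an Azure resource ID, so tags, policies and extensions can be applied exactly as they would be for a native Azure VM.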

How Azure Arc Can Help with Managing Hybrid Cloud

Azure Arc offers a comprehensive solution to these challenges by bringing Azure’s capabilities to on-premises, multi-cloud, and edge environments, offering the following key benefits.


Unified Management Across Environments

Azure Arc allows businesses to manage both their on-premises infrastructure and cloud resources from a single Azure portal. This unified management experience reduces the operational overhead and eliminates the complexity of managing multiple tools and platforms.

Consistent Security and Governance

With Azure Arc, businesses can apply consistent security policies, governance controls, and compliance standards across their hybrid environment. By extending Microsoft Defender for Cloud (formerly Azure Security Centre) and Azure Policy to non-Azure resources, businesses can ensure their infrastructure remains secure and compliant regardless of where it resides.
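For example, because Arc projects non-Azure machines into Azure resource groups, a standard Azure Policy assignment covers them too. The assignment below is a hedged sketch with placeholder names, scope and policy definition ID:

```shell
# Assign a policy at a scope that contains Arc-enabled resources (all values are placeholders)
az policy assignment create \
  --name "require-env-tag" \
  --display-name "Require an 'environment' tag on resources" \
  --policy "<policy-definition-name-or-id>" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/rg-hybrid"
```

Compliance results for on-premises and cloud resources then appear side by side in the same Azure Policy compliance view.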

Simplified Resource Management

Azure Arc provides visibility into resources across different environments, offering businesses real-time insights into their hybrid infrastructure. This visibility simplifies monitoring, troubleshooting, and performance optimisation.

Flexibility with Kubernetes and Applications

Azure Arc enables businesses to manage Kubernetes clusters and deploy applications consistently across hybrid environments. Whether running workloads on Azure Kubernetes Service (AKS), on-premises, or in another cloud, Azure Arc allows organisations to manage and deploy applications uniformly.
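A minimal sketch of connecting an existing cluster, assuming the Azure CLI `connectedk8s` extension and a kubeconfig that already points at the cluster; names are placeholders:

```shell
az extension add --name connectedk8s

# Connect the current kubeconfig cluster to Azure Arc
az connectedk8s connect \
  --name "factory-cluster" \
  --resource-group "rg-hybrid" \
  --location "uksouth"

# List Arc-enabled clusters in the resource group
az connectedk8s list --resource-group "rg-hybrid" --output table
```

After connection, the same GitOps configurations and policies can target AKS and Arc-enabled clusters alike.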

Cost Optimisation

Azure Arc allows organisations to apply cost management and optimisation techniques across hybrid and multi-cloud environments, helping businesses maintain control over their cloud expenditure.

Enhanced Automation

By using Azure Arc, businesses can automate tasks such as patching, updates, and scaling across their infrastructure. Automation tools within Azure Arc help organisations maintain performance and reduce the manual intervention required to manage hybrid environments.

The complexities of managing hybrid cloud environments are undeniable, but Azure Arc provides a powerful solution to streamline and simplify this task. By offering unified management, consistent security, and the ability to govern resources across on-premises, multi-cloud, and edge environments, Azure Arc helps businesses reduce complexity, enhance operational efficiency, and ensure security and compliance.

For companies looking to optimise their hybrid cloud strategies and embrace the full potential of their infrastructure, Azure Arc is a game-changer. It allows businesses to focus more on innovation and growth, rather than being bogged down by the challenges of managing disparate systems and platforms.

Use Cases for adopting Azure Arc

It can often be difficult to imagine how technologies will work for your business, which is why we have pulled together some of our top use cases for leveraging Azure Arc.

Financial Services: Compliance and Data Security

A financial services firm operates across multiple regions and needs to maintain strict compliance with local data privacy laws. By using Azure Arc, the company can extend Azure’s security and compliance tools to its on-premises and other cloud environments, ensuring that all workloads adhere to the relevant regulations. Azure Arc’s unified management and governance capabilities also provide visibility into the firm’s entire infrastructure, helping IT teams monitor and secure sensitive financial data consistently.

Retail: Multi-Cloud Operations

A global retail company has a diverse IT infrastructure that spans multiple public cloud providers and on-premises data centres. With Azure Arc, the company can manage resources across multiple clouds and on-premises systems from a single portal. This simplifies operations, reduces complexity, and enables the company to adopt new technologies (like edge computing) more seamlessly, ensuring that its global e-commerce platform runs smoothly across all environments.

Manufacturing: Edge Computing for IoT

A manufacturing company relies on IoT devices to monitor production lines in remote locations. These devices need to process data locally for real-time decision-making but also require centralised management for software updates and data analytics. Azure Arc helps the company manage its edge infrastructure by enabling Kubernetes clusters and IoT solutions to be centrally governed, ensuring the smooth operation of the production lines while maintaining consistency with cloud applications and services.

Healthcare: Hybrid Cloud for Patient Data Management

A healthcare provider must store patient records in strict compliance with the Data Protection Act (DPA) 2018 and Common Law Duty of Confidentiality (CLDC). Healthcare organisations can adopt a hybrid cloud strategy, where patient records are stored on-premises, while non-sensitive applications are hosted in the cloud. With Azure Arc, the provider can apply consistent security policies and governance across both environments, ensuring that sensitive patient data remains protected and compliant.

As we step into 2025, the cloud landscape continues to evolve at a rapid pace, and Microsoft has packed in some exciting developments to look forward to.

This year promises to be an exciting one: from the increasing adoption of AI to the continued focus on hybrid cloud solutions, sustainability insights and migrations, there’s a lot going on.


Let’s dive into the top cloud trends for 2025. 

1. AI, AI, AI 

Artificial Intelligence is set to dominate the tech landscape in 2025, and Azure is no exception. As AI adoption increases, we can expect a greater reliance on best practice infrastructure and landing zones to support these workloads.  

This year, the focus will be on creating robust and scalable environments that can handle the demands of AI applications. The Azure OpenAI Landing Zone Reference Architecture is a key resource for organisations looking to implement AI solutions effectively. 

Check out that reference architecture here. 

2. Continued Focus on Hybrid Cloud Solutions 

Hybrid cloud solutions will continue to be a major focus for Azure in 2025. With the introduction of Azure Local enabled by Azure Arc, Microsoft is making it easier for businesses to manage their cloud and on-premises environments seamlessly.  

Azure Arc enables organisations to extend Azure management and services to any infrastructure, while Azure Local provides cloud infrastructure for distributed locations. These innovations are designed to provide greater flexibility and control for businesses, ensuring they can optimise their cloud strategies effectively.

3. Sustainability Insights 

Sustainability is becoming an increasingly important consideration for businesses, and Azure is key to helping organisations achieve their sustainability goals. In 2025, we can expect to see improved visibility into sustainability insights, allowing customers to better understand the environmental impact of their Azure resources.  

Microsoft has already started building tools to provide these insights, and we anticipate further expansion of these capabilities this year. This focus on sustainability aligns with the growing demand for eco-friendly business practices and the need for organisations to stay accountable to their ESG goals.

4. Cloud Migrations 

The cloud trend of migrating to Azure shows no signs of slowing down in 2025. Many organisations are still in the process of moving their workloads to the cloud, and the pace of migrations is expected to continue at the same rate as in 2024. According to Gartner, more than 70% of companies have some cloud footprint, and over 80% are expected to adopt a cloud-first approach by 2025. 

This shift towards cloud-native platforms is driven by organisations’ need for greater agility, scalability, and cost-efficiency. 

Looking forward 

2025 is shaping up to be an exciting year for Microsoft Azure and its users. With advancements in AI, a continued focus on hybrid cloud solutions, enhanced sustainability insights, and ongoing migrations, Azure is poised to remain at the forefront of cloud innovation.  

As organisations continue to embrace digital transformation, Azure’s comprehensive suite of services and tools will play a crucial role in helping organisations achieve their goals. Stay tuned for more updates and get ready to harness the power of Azure in 2025!

If you are a VMware customer, you will have heard all about the acquisition of VMware by Broadcom. This takeover has created a lot of confusion and concern among VMware users with drastic changes to the VMware portfolio. In this blog, we look at what this acquisition means for your current workloads and explore the VMware alternatives available to you.

What Has Changed with VMware?

Broadcom’s acquisition of VMware has led to some major changes in its products and services that affect millions of customers worldwide. These changes are aimed at simplifying VMware’s portfolio, aligning with industry trends and, according to Broadcom, offering more value and flexibility to customers.

However, they also pose some challenges and uncertainties for existing and potential VMware users, who need to understand how these changes will impact their IT environment and operations. Let’s take a look at the core changes.


Subscription licensing

One of the main changes that affect VMware customers is the transition from perpetual to subscription licensing, which means that you will have to pay a recurring fee to use VMware products and services. This can have a significant impact on your budget and IT strategy, especially if you are used to paying upfront for your licenses.

This shift away from perpetual licensing also impacts Support and Subscription (SnS) renewals, Hybrid Purchasing Program (HPP) and Subscription Purchasing Program (SPP) credits.

It’s worth noting that customers can continue to use perpetual licences that they’ve already purchased, but after a customer’s effective end date, new licences can only be purchased on a subscription basis.

Reduction of SKU Options

Another change is the reduction of SKU options to two new subscription-based SKUs: VMware Cloud Foundation (VCF) and VMware vSphere Foundation (VVF). VCF is an enterprise-grade private cloud platform that includes vSphere, vSAN, NSX and the Aria suite of tools, while VVF is a basic on-premises offering that includes Aria Operations and Aria Operations for Logs. These SKUs are now based on per-core licensing, which requires a minimum of 16 cores per socket and two sockets per host, increasing costs for smaller organisations that don’t need that quantity of resources.
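To illustrate why those minimums sting smaller hosts, here is a purely illustrative calculation using the 16-core-per-socket floor and two sockets described above (check current Broadcom terms for the authoritative rules):

```shell
#!/usr/bin/env bash
# Illustrative arithmetic only: billable cores for one host under a per-core model
# with a 16-core-per-socket minimum and two sockets per host.
sockets=2
physical_cores_per_socket=8   # a small host with only 8 physical cores per socket
min_cores_per_socket=16       # licensing floor per socket

# Each socket is billed at the greater of its physical cores and the floor
billable_per_socket=$(( physical_cores_per_socket > min_cores_per_socket ? physical_cores_per_socket : min_cores_per_socket ))
billable_total=$(( billable_per_socket * sockets ))

echo "Billable cores for this host: $billable_total"
```

A host with only 16 physical cores still licenses 32, which is exactly the increased cost the per-core minimums impose on smaller environments.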

Additionally, VMware has discontinued some products and features, such as ESXi Hypervisor Free Edition which was previously commonly used for training, testing, or POC purposes. For a full breakdown of the changes, see the table provided here.

End-User Computing (EUC) Division Sold

VMware has also sold off its End-User Computing (EUC) division to private equity firm KKR, which includes the VDI solution Horizon and Workspace ONE, an endpoint management platform. Although KKR promises further innovation and investment in the products, Horizon will likely struggle to maintain market share as flexible working requirements increase demand for cloud-based VDI solutions such as Azure Virtual Desktop.

Invitation Only Partner Programme

VMware’s partner programme is now invitation-only, meaning that not all partners will be able to sell and support VMware products and services. This can create uncertainty and disruption for customers who rely on their existing partner relationships and want to continue working with them. Moreover, VMware support has migrated to Broadcom support portals, which may affect the quality and availability of technical assistance – we will have to wait and see on this one.

VMware Alternatives for Customers


As a customer who is affected by the recent changes, you may be wondering what the VMware alternatives are. Depending on your IT strategy, migration timeline, budget, and cloud skills, you have different routes to consider. Here are your possible options and their pros and cons.

  • Continue with VMware: You can choose to stay with VMware and adapt to the new subscription model, SKU options, partner program, and support portals.
    – This option may suit you if your pricing has not increased drastically, you’re satisfied with the VMware products and services, have no plans to migrate to the cloud, or require a hybrid setup.
    – However, this option will likely entail higher costs, licensing complexity, partner uncertainty, and IT strategy disruption.
  • Migrate Hypervisor: You can choose to migrate your workloads to a different hypervisor solution, such as Hyper-V, Nutanix, or Oracle VirtualBox.
    – This option may suit you if you want to reduce your dependency on VMware, lower your licensing costs, or leverage existing skills and tools.
    – However, this option may also involve significant migration efforts, compatibility issues, performance impacts, and operational changes. It is therefore likely only a useful option if you have a relatively small digital estate.
  • Migrate to the cloud: You can choose to migrate your workloads to the cloud, either as native services or as VMware-based services.
    – This option may suit you if you want to take advantage of the cloud benefits, such as scalability, flexibility, security, innovation, and integration.
    – However, this option may also require some refactoring, rearchitecting, reskilling, and governance changes.

In the next section, we focus on the third VMware alternative, migrate to the cloud, and show you how you can use Azure as your destination platform.

Migrating your Workloads to Azure

Azure VMware Solution

One option for migrating your VMware workloads to Azure is Azure VMware Solution (AVS), a service that allows you to run VMware on dedicated Azure infrastructure managed by Microsoft. With AVS, you can leverage your existing VMware skills, processes and tools to manage and operate your VMware environment in Azure. You can also benefit from the scalability, security, and integration of Azure services, such as backup, monitoring, identity, and networking.

We will explore the full benefits and challenges of choosing AVS in our next blog.

Azure Native Services

The other possible route away from VMware is to migrate your workloads to Azure native services, such as IaaS Azure VMs or PaaS services such as Azure App Service, Azure SQL, Azure Files, and Azure Virtual Desktop. These services offer more scalability, flexibility, security, and integration with the Azure cloud platform, and can help you reduce costs, improve performance, remove management overhead and modernise your applications.

While AVS can offer a quick and easy way to migrate your VMware workloads to Azure, it may not be the best long-term solution for your organisation. AVS still relies on the same VMware stack that runs on-premises, which means you will miss out on some of the advanced features and capabilities that Azure native services provide, such as autoscaling, serverless computing, AI and analytics, DevOps integration, and more. By migrating to Azure native services you can progress your migration journey and modernise your applications and data, making them more secure, resilient and agile.

Transparity as your Migration Partner

If you’ve looked at the VMware alternatives and decided the cloud is right for you, we can help. Whether you want to migrate to AVS, jump straight to Azure native or just aren’t sure yet, we can help you with your migration journey. Transparity offers a range of services and solutions, including:

  • VMware Migration Assessment: We can help you assess your current VMware environment and identify the best destination for your workloads. We can also provide you with a detailed business case and cost analysis, showing you the potential savings and benefits of moving to Azure.
  • Azure Migration Service: We can help you plan, execute, and validate your migration to Azure, using proven methodologies and best practices based on the Cloud Adoption Framework. We can also help you optimise your Azure environment and leverage the full potential of the cloud platform.
  • Azure Managed Service: We can help you manage and monitor your Azure native services, ensuring their availability, reliability, and performance. We can also help you implement backup, disaster recovery, and security solutions, as well as provide you with ongoing support and guidance.
  • Azure Skilling Service: We can help you upskill your team and equip them with the knowledge and competencies to use Azure native services. We can also help you adopt a cloud-first mindset and culture, and align your IT strategy with your business goals.
  • Access to Microsoft Funding: As an Azure Expert MSP, we can also help you access the Azure Migrate and Modernise program, which provides funding and upskilling for eligible customers making the move to Azure.

Free VMware Rapid Migration Assessment

We promise to find you the best migration route for your workloads. If you end up with AVS, we can offer you:

  • 5-year fixed pricing
  • A rapid migration
  • Technology your team understands
  • 3-year free security updates

Enquire today

Introduction

The Microsoft Ignite conference is now over, and as always we were treated to a host of new features across the Microsoft product range. Going into the event you would be forgiven for thinking the only subject on the agenda was artificial intelligence. AI was of course the centrepiece of Ignite, but amongst the fantastic AI news were also some great new feature announcements relevant to the world of Azure and infrastructure. In this blog we cover our top 5 announcements from Ignite.

Copilot for Azure

We knew it was only a matter of time, but at Ignite Microsoft finally announced Copilot for Azure. Copilot for Azure is an AI companion IT professionals can use to design, operate, troubleshoot and optimise their Azure environment and workloads. It gives you the ability to ask questions in natural language, and in return Copilot can answer the question, run queries or perform tasks safely on your behalf. The solution utilises Large Language Models (LLMs) to interpret and analyse Azure Resource Manager (ARM), Azure Resource Graph (ARG), cost information, product documentation, support and best practice guidance, and more. As Copilot has access to the Azure control plane, it can be used for a host of IT management tasks, including:
  • Understand your environment more effectively: Retrieve resource information, author graph queries and analyse costs and understand health events alongside current issues.
  • Do more with less: Deploy workloads, build infrastructure, secure and protect accounts.
  • Write and optimise code: Generate CLI scripts, author API policies, generate Kubernetes YAML files and discover performance recommendations that include code optimisations.
  • Best Practice & Design Information: Learn which services are best suited to your workload, understand security or scaling features and learn more about all features in Azure.
All of the above is accessed via Copilot from anywhere within the Azure portal. It’s worth noting Copilot for Azure isn’t just for workloads and infrastructure residing in Azure; it can also be used for hybrid workloads that are Azure Arc-enabled.

An obvious worry for anyone looking to utilise this tool is security and permissions. Providing an AI service with the power to view and manage your cloud IT estate does at first sound very scary, but as with many Microsoft products, security is a top priority. When developing the service, Microsoft was guided by its AI principles, and all actions are carried out within your own organisation’s security, governance and privacy policies. Copilot itself doesn’t have an identity from a security perspective; all tasks are performed using the permissions of the current user via role-based access control (RBAC). It’s also worth noting that Copilot’s language models are not trained on your tenant data.

NOTE: This feature is currently in preview and requires sign-up.

Azure Migrate New Features

Azure Migrate is a widely used native tool for workload discovery, assessment and migration. This essential migration tool also benefitted from a number of new features announced at Ignite:
  • App & Code Assessment: This new capability allows you to assess .NET and Java applications at a code level in order to provide you with guidance on how to migrate or re-platform the application. The assessment output allows you to understand an app’s compatibility with target services such as App Service, Spring Apps and more. This is a fantastic new native feature that will provide insights, reduce issues and accelerate your application migrations.
  • New Workloads: You can now assess readiness and estimate costs for Spring Web Apps to Azure Spring Apps and also ASP.NET on IIS to App Service Container. This builds upon the current assessment capabilities for VMs, App Service, SQL and more.
  • TCO Business Case Additions: Prior to this update, a business case assessment in Azure was great at showing a predicted TCO for moving workloads to Azure. The problem was it didn’t provide the full picture, as it didn’t take into account any management services. As of December 2023 this has changed: a business case now includes cost predictions for management services such as Azure Backup, Update Manager and Azure Monitor. Although still not perfect, it’s a great step towards a reliable TCO prediction.
This is one of the biggest updates we have seen to Azure Migrate in some time and it’s really good to see Microsoft continue to invest in such an important tool.

Virtual Machine Hibernation

This isn’t one of the most groundbreaking announcements at Ignite, but it could be one of the most utilised. A common way to optimise costs for Virtual Machines (VMs) is to deallocate them when they are not in use, and automating the shutdown and startup of VMs, including those used for VDI purposes, is an extremely common task in Azure. The issue with deallocating a VM is the loss of its in-memory state, meaning all services and apps need starting again from a clean boot.

Microsoft has now announced the availability of Virtual Machine hibernation. Hibernating a machine still deallocates the VM to save costs, but it also persists the in-memory state, so when the VM is later started, all apps and processes resume from their previous state. It does this by signalling the OS to perform a suspend-to-disk operation at shutdown, which stores the VM’s memory on the OS disk. One obvious use case is pre-warming machines with long startup times so they can be started quickly and be ready when required.

VM hibernation can also be used for Azure Virtual Desktop (AVD) and Citrix DaaS for Azure, and AVD is where we think hibernation is a great use case. In AVD we can already start and stop personal desktop session hosts automatically via Scaling Plans, and Scaling Plans now offer a Hibernate option instead of Deallocate. This means when a VM starts up the next day, all the user’s apps will be open exactly where they were left. Cost savings and increased productivity in one simple update.

Hibernation is available in all public regions for General Purpose Intel and AMD VM sizes running both Windows and Linux. NOTE: As with deallocation, hibernation only saves on VM costs; you will still be charged for disks and networking resources such as Public IP addresses.
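As a hedged sketch with placeholder names (and noting that supported sizes, images and regions should be checked against current Azure documentation), hibernation is enabled at creation time and then used in place of a plain deallocate:

```shell
# Create a VM with hibernation enabled (names, image and size are placeholders)
az vm create \
  --resource-group "rg-demo" \
  --name "vm-hib-demo" \
  --image "Win2022Datacenter" \
  --size "Standard_D4s_v5" \
  --enable-hibernation true

# Hibernate rather than deallocate: the in-memory state is persisted to the OS disk
az vm deallocate --resource-group "rg-demo" --name "vm-hib-demo" --hibernate true

# On start, apps and processes resume from where they left off
az vm start --resource-group "rg-demo" --name "vm-hib-demo"
```

The key design point is that hibernation is a property of the VM, so it must be enabled at creation; an existing VM created without it cannot simply be hibernated.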

Azure Chaos Studio

Azure Chaos Studio is a managed service that uses chaos engineering to purposely inject faults into an application or infrastructure to better understand the resilience of an application. Traditionally, this type of fault injection was largely done against code by developers. Azure Chaos Studio takes that further by enabling you to inject faults into the infrastructure, so you can test an app’s resilience against real-world faults. Some example fault injections include:
  • Increase CPU and memory pressure
  • Stop Windows or Linux services and kill processes
  • Create DNS failures
  • Increase network latency, disconnect networks or increase packet loss
  • Shut down Virtual Machines, App Services and Databases
  • Load testing
You can already start to see how useful these fault injections will be when testing an application’s resilience. You can also integrate the service with Azure Monitor, Application Insights or Log Analytics to visualise your experiments and results.
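Chaos experiments in Azure Chaos Studio are defined against real infrastructure, but the underlying idea is simple to demonstrate. The Python sketch below (purely illustrative, not the Chaos Studio API) injects transient faults into a dependency and verifies that a retry wrapper keeps the caller resilient:

```python
import random

random.seed(42)  # deterministic run for the example

def flaky_dependency(fail_rate: float) -> str:
    """Simulates a downstream service with an injected intermittent fault."""
    if random.random() < fail_rate:
        raise ConnectionError("injected fault: dependency unavailable")
    return "ok"

def call_with_retries(fail_rate: float, attempts: int = 5) -> str:
    """A resilient caller: retries the dependency up to `attempts` times."""
    for _ in range(attempts):
        try:
            return flaky_dependency(fail_rate)
        except ConnectionError:
            continue
    raise RuntimeError("dependency never recovered")

def survived(fail_rate: float) -> bool:
    try:
        return call_with_retries(fail_rate) == "ok"
    except RuntimeError:
        return False

# 'Experiment': with a 50% injected failure rate, five retries should let the vast
# majority of calls succeed, where a single naive call would fail half the time.
successes = sum(survived(0.5) for _ in range(100))
print(f"{successes}/100 calls survived the injected faults")
```

Chaos Studio applies the same principle one layer down: the fault lands in the infrastructure (CPU pressure, killed processes, DNS failures) and you observe whether the application's resilience mechanisms hold up.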

Microsoft Azure Chips

Rumours have been circulating for a while, but at Ignite Microsoft confirmed they have been working on their own silicon chips. Microsoft announced two new chips: the first is Microsoft Azure Maia, an AI accelerator chip designed to run cloud-based training and inferencing for AI workloads. The second is Microsoft Azure Cobalt, a cloud-native ARM-architecture chip optimised for performance and cost-effectiveness on general-purpose workloads. Custom silicon from Microsoft makes perfect sense as the usage of AI starts to skyrocket. It’s worth noting Microsoft are not turning their back on their current silicon providers; in fact, they have recently extended their partnership with AMD to provide new VM sizes fit for AI.

Additional Announcements

In addition to our top 5 announcements, Microsoft released a few more features we think deserve a mention:
  • AVD: Auto-Scale for personal desktops and support for SSO and passwordless authentication
  • VMSS: You can now attach a VM to an existing Virtual Machine Scale Set
  • SQL Managed Instance: Start / Stop SQL MI when not used to save on billing
  • Private Subnet: Create subnets that prevent insecure outbound Public IP creations
As always, Microsoft released a host of other new features; for more information, check out the official Microsoft Ignite 2023 Book of News. If you want further information, or you think one of the features listed in this blog could be useful to your business, feel free to reach out to the Azure Cloud Experts at Transparity, who would be happy to discuss any new feature.

Overview – what’s happening?

The Windows Server 2012 Operating System has now been out for over 10 years, with its R2 version close behind. If you’re still running these Operating Systems for your workloads, you need to be aware that on October 10th 2023 Microsoft will end support for this OS. This follows the end of support for SQL Server 2012 on July 12th 2022. Once the end of support date has passed, you will no longer receive security updates, standard updates, bug fixes or technical support. Any servers running this OS will be severely impacted in terms of security posture and your ability to resolve complex issues.

Extended Security Updates

Since the release of Server 2012 R2, Microsoft have released three major Server OS versions, including the most recent, Server 2022, which was released towards the end of 2021. Ideally you will already be running a newer version of the OS, but if that’s not the case, don’t worry: Microsoft have a solution. You can continue to protect your workloads and receive support by applying Extended Security Updates (ESUs). ESUs provide ‘Critical’ and ‘Important’ patches for your workloads for three additional years, and an active support contract can also be used during those three years. It’s worth noting ESUs do not include new features, non-security hotfixes, or design change requests; however, Microsoft may include non-security fixes as deemed necessary.

What are your options?

You have three main options at your disposal if you need to cover your 2012/R2 OS and ensure you still receive updates and support.

Option 1: Migrate to Azure and receive free ESUs

If you migrate your workloads to Azure, or if they already exist in Azure, you will benefit from the additional three years of ESUs completely free of charge. This is applicable to the following services:
  • Azure Virtual Machines
  • Azure Dedicated Host
  • Azure VMWare Solution
  • Azure Stack (Hub, Edge, HCI)
If you are already thinking of migrating to Azure, this is a simple step that gives you time before deciding whether to upgrade the OS or move to a PaaS service, which removes management of the OS completely. It’s also worth noting that Azure Migrate, the tool used for migrating workloads to Virtual Machines, now includes the ability to do an in-place upgrade during the migration process.

Option 2: Remain on-premises and upgrade the Operating System

If you do need to remain on-premises, you can of course look to upgrade all of your servers to a newer OS that will be supported by Microsoft past October 10th 2023.

Option 3: Remain on-premises and purchase Extended Security Updates

If you wish to remain on-premises and not upgrade the OS, you will need to purchase ESUs to remain secure and supported until October 13th 2026. Purchased ESUs continue for three years and are renewable on an annual basis. Eligible customers can purchase ESUs, which are sold in 16-core packs for the three years and provided via volume licensing using a license key. The prices are shown below:

Year 1: 100% of full license price annually
Year 2: 100% of full license price annually
Year 3: 100% of full license price annually

If you wait until year two to buy ESUs you will need to pay for two years’ worth, and if you wait until year three you have to pay for all three years upfront.

TOP TIP: Microsoft recently announced the ability to purchase ESUs for on-premises workloads using Azure Arc. This allows you to be billed via the Azure portal on a monthly basis, instead of annual billing, without the need for a key. Azure Arc also has the added benefit of surfacing your on-premises workloads in Azure, where you can monitor, patch and secure them via a single pane of glass. This is a great option for hybrid environments, or for workloads that will be moving to Azure soon but after the EOL deadline.
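The "pay for skipped years" rule is easy to get wrong, so here is a small Python sketch of the arithmetic. The pack price is a made-up placeholder; the real figure depends on your licence agreement:

```python
# Hypothetical pack price for illustration -- the real figure depends on your licensing.
PACK_PRICE = 1000.0   # assumed cost of one 16-core ESU pack for one year
YEARS = 3

def first_invoice(start_year: int, packs: int = 1) -> float:
    """Amount due at your first ESU purchase: the current year plus every skipped year."""
    return start_year * PACK_PRICE * packs

def total_cost(packs: int = 1) -> float:
    """Whenever you start, the three-year total is the same -- waiting saves nothing."""
    return YEARS * PACK_PRICE * packs

print(f"Start in year 1: first invoice {first_invoice(1):,.0f}")
print(f"Start in year 2: first invoice {first_invoice(2):,.0f} (two years' worth)")
print(f"Start in year 3: first invoice {first_invoice(3):,.0f} (all three upfront)")
print(f"Three-year total either way: {total_cost():,.0f}")
```

In other words, deferring the purchase only defers the cash flow; it never reduces the total, which is one more reason to weigh Option 1 or 2 first.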

Windows Server 2008 and 2008 R2

Windows Server 2008 also went out of support a few years ago, and many of the options above were available at the time. Be aware that if you applied an ESU for Windows Server 2008, or migrated the server to Azure, that three years of extended support will end on January 9th 2024. After this point, you will have no way to protect your servers other than upgrading to a newer OS.

What is Azure Virtual Desktop?

Azure Virtual Desktop (AVD) is a Desktop and App virtualisation service that runs completely in the Azure public Cloud. Virtualisation is a method of streaming a user’s desktop or application remotely to a device so it can be accessed anywhere. Prior to cloud computing in an on-premises scenario, this was often achieved using technologies such as Remote Desktop Services or Citrix.

In a post-pandemic world where remote working is common, AVD allows users to access work-related applications from their own devices anywhere in the world. With the end of support for Windows 10 fast approaching, AVD stands out as a practical and timely solution for organisations needing to transition to a modern, secure platform without immediate hardware replacement investments.

The Benefits of Azure Virtual Desktop

Since its release in late 2019, AVD has continued to add new features and capabilities which has helped it become an extremely useful tool for business and enterprise users all around the world. Some of the key benefits of the solution include:

  • Windows 10, 11 and Server OS: It delivers Windows 10, 11 and Windows Server applications and desktops virtually anywhere in the world on Windows, macOS, Android, web clients and more.

  • Windows Multi-Session + FSLogix: Reduces costs by pooling users on single hosts using Windows 10 and 11 Multi-Session capabilities. FSLogix can be used to provide roaming profiles on multi-session hosts using Azure Storage. You also have the ability to provide personal persistent desktops to meet specific use cases.

  • Licensing: Use existing eligible licenses such as Microsoft Enterprise E3, E5, Microsoft 365 E3, E5, A3, A5, F3 and more.

  • Reduce management overheads: Deploy and scale in minutes without the need for Brokers, Gateways or Load balancing traditionally found in legacy VDI solutions, which in turn reduces the management overhead. With AVD you only need to manage images, session hosts, licensing, and identities.

  • Simplified image management: Bring your own unique images to meet your application demands, or choose pre-built images from a gallery, including pre-configured Windows images with Office applications.

  • Auto-scaling: Built-in autoscaling and shutdown features drive cost efficiencies when workloads are not being used.

The Benefits of Azure Virtual Desktop for Windows 10 End of Support

With the end of support for Windows 10 fast approaching, now is the time for organisations to choose between migrating to Windows 11 and an alternative option.

AVD is a compelling solution for organisations transitioning away from Windows 10. One of its key advantages is the ability to migrate seamlessly to Windows 11 without the need for costly hardware upgrades, enabling organisations to optimise their existing assets while embracing modern technology. Furthermore, AVD’s flexibility in delivering virtual desktops and applications to diverse devices ensures continuity for remote workers and teams spread across different regions.

In addition to its operational benefits, AVD prioritises compliance and security by offering secure, virtualised access to applications and desktops. As the end of support for Windows 10 approaches, this becomes especially critical, ensuring that organisations are not exposed to vulnerabilities associated with unsupported systems. With built-in security features such as identity management, data encryption, and regular updates, AVD provides a robust environment that aligns with regulatory requirements and protects sensitive information, making it an ideal solution during this transitional period.

For organisations looking to stay agile and competitive as Windows 10 reaches the end of its lifecycle, AVD provides a future-ready platform that combines scalability, performance, compliance, and security with all of the benefits listed in the above section as key contributing factors.

Azure Virtual Desktop Use Cases

As AVD is provided via various licensing options including the ability to provide external user access via per-user access pricing, the service has a number of use cases, including:

  • End-of-support transition: As the end of support for Windows 10 approaches, organisations requiring a quick or cost-effective solution to move away from the unsupported system can deploy AVD, bypassing the need for extensive hardware upgrades and lengthy migration processes.
  • Remote working: The most obvious use case is allowing your employees to work from home, or anywhere in the world, from their own device or a company-provided one, whilst still getting the same experience they would in an office and without the need to procure expensive hardware.

  • Elastic workforces: AVD is great for elastic workforce requirements such as remote work, mergers, acquisitions, short-term employees, contractors and partner access.

  • Specialised workloads: Via personal desktops, high-performance compute and options such as multi-screen support, AVD can support specialised workloads such as software development, financial applications, 3D modelling, graphics design, CAD and much more.

  • Migrate to the cloud: Simplify your migration from traditional VDI solutions such as RDS and Citrix by bringing your current images into Azure and running them in AVD session hosts.

  • Stream apps to external users: One benefit to the per-user external licensing option is businesses of all sizes now have the option to use AVD to stream their own applications to external users and customers in scenarios where traditionally, the customers would need to host hardware infrastructure to access the application. See a case study of this use case in action.

Optimising for Performance

The planning and design phase of any AVD deployment is important to ensure the user experience is as good as possible. The apps and desktop should feel like they are running locally to the user and not from the cloud. There are a few performance-related areas to think about when trying to improve the user experience:

  • Location: The region of the Azure AVD Session Hosts and therefore latency will contribute to the overall user experience. This depends largely on where users are based. Plan for your user base, understand locations and use services such as log analytics to view connection quality data for your users.

  • Sizing: Sizing VMs in Azure is a balancing act between performance and cost. One big mistake would be to under-provision compute power in a way that greatly decreases the overall performance. Planning and analytics on your users’ current usage will help you plan accordingly.

  • App/OS optimisation: When creating images and installing software for AVD hosts, be aware that not all applications will be optimised out of the box. For example, Microsoft Teams requires a number of customisations and registry edits to optimise the overall experience.

  • Testing: Take time to test and run POCs before deploying at a large scale. Use the information you have from current usage, user personas and app requirements to help the design decisions early on.

Optimising for Cost

As with many solutions in the cloud, performance is important, but it needs to be balanced against cost. As previously covered, the per-user cost of AVD comes via licensing, much of which organisations will already be using today. The additional cost comes from the AVD service and infrastructure running in Azure, which includes Session Hosts, Storage, Logging, Backups and Networking.

Licensing costs are easier to predict than infrastructure costs, but with correct planning and an understanding of the available features you can optimise for cost. Some key areas to think about when balancing cost with performance include:

  • Multi-Session: With AVD you have the ability to spread multiple users across Windows 10/11 hosts using multi-session capabilities. Be sure this meets the requirements of the users and apps. In many cases, multi-session is possible and will save costs, not just on the number of session hosts but also by reducing management overhead.

  • Scaling plan: The Azure Virtual Machine session hosts will contribute at least 70% of your overall Azure infrastructure costs, so controlling the number of session hosts will greatly optimise your spend on AVD. Scaling Plans give you the ability to scale session hosts up or down based on time of day, day of the week or session limits. Configured correctly, this ensures you provide exactly the compute required, optimising costs.

  • Auto-shutdown: For simpler AVD deployments with more predictable working hours, shutting down the Virtual Machines will help with costs. A business requiring a small number of session hosts used 9-5, Monday to Friday, will benefit from simply switching the hosts off out of hours. More complex environments will likely need Scaling Plans.

  • Start VM on connect: One handy feature that optimises the user experience alongside cost reduction is ‘Start VM on Connect’, which allows users to turn on their session hosts only when they need them. If a user wants to work over a weekend or in the evening, this feature lets you schedule shutdown but still provide a method of accessing the desktop. Connection takes longer while the VM starts, but users are made aware of this when connecting.

  • Right-Sizing: Session hosts sizing and the compute resources (vCPU/Memory) you assign to each user largely depend on workload requirements and specific use cases. However, as with all VMs in Azure, ensure you size them correctly and try to avoid over-provisioning compute resources.

    This becomes especially important when delivering personal desktops to your users instead of pooled desktops, as the number of session hosts you need to manage, and therefore pay for, increases significantly. Before deploying, estimate resource requirements based on current usage. Once you have provisioned a host, use logging and monitoring to understand real-world compute requirements and re-size accordingly.
  • Reserved Instances / Savings Plans: As with any VM in Azure, Session Hosts gain huge cost savings via Reserved Instances or Savings Plans. Reserved Instances (RIs) allow you to save by committing to spend over a 1 or 3-year period.

    Savings Plans reduce costs by committing to a fixed hourly spend over a 1 or 3-year period. Using these plans can save up to a whopping 72%. As always with such offers, the savings don’t always stack on top of other cost optimisations, especially with Reserved Instances. Ensure the plans make sense: they are largely aimed at resources that run 24 hours per day with no plans to be decommissioned in the next few years. For example, if a VM is covered by an RI you won’t benefit from shutting it down, as you’ve already committed to the spend of it running 24/7. RIs suit businesses whose session hosts will always be running.
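The interaction between scaling plans and reservations above is worth working through with numbers. The Python sketch below uses purely hypothetical rates and hours (not real Azure pricing) to compare a 24/7 pay-as-you-go host, a host on a business-hours scaling plan, and a 24/7 reserved host:

```python
# Illustrative figures only -- real rates depend on VM size, region and term.
PAYG_RATE = 0.40        # assumed pay-as-you-go rate per session host hour
RI_DISCOUNT = 0.60      # assume a 3-year reservation cuts the effective rate by 60%
HOURS_PER_MONTH = 730
SCALED_HOURS = 10 * 22  # scaling plan keeps hosts on roughly 10h/day, 22 working days

always_on_payg = HOURS_PER_MONTH * PAYG_RATE
scaled_payg = SCALED_HOURS * PAYG_RATE
# A reservation bills the committed capacity whether or not the VM runs,
# so pairing it with a scaling plan wastes the commitment.
always_on_ri = HOURS_PER_MONTH * PAYG_RATE * (1 - RI_DISCOUNT)

print(f"24/7 PAYG:          {always_on_payg:.2f}/month")
print(f"Scaling plan, PAYG: {scaled_payg:.2f}/month")
print(f"24/7 reserved:      {always_on_ri:.2f}/month")
```

With these assumed figures the scaling plan beats the reservation for a business-hours workload, while the reservation wins for hosts that genuinely run around the clock; the point is to model your own hours before committing.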

Four Key Takeaways

Azure Virtual Desktop has become an extremely popular service over the last few years and with the increase in remote working, this isn’t likely to change. As discussed in this blog, the increase in performance alongside the reduction of management overheads is significant compared to traditional VDI solutions, thus making this a no-brainer for companies looking to empower their users via the public cloud. To conclude this blog post, some key takeaways for AVD include:

  1. It scales easily: The ability to scale on-demand and save costs is one of the biggest drivers for organisations moving to the public cloud and AVD is no exception. Utilise AVD scaling plans to optimise costs and user experience.

  2. Understand your users: Although arguably relevant to any technical deployment, ensure you really understand your users and their individual workload requirements to really benefit from everything AVD has to offer. Use monitoring and logging information to drive decisions and changes to the deployment.

  3. You could already be licensed: Many of the organisations Transparity speaks to already have AVD-eligible user licensing in place. You can be up and running with a POC in no time and test drive the service yourself.

  4. Reduce management overhead: With zero requirements for management brokers, licensing servers or load balancing, the reduction of management overhead is huge compared to traditional on-premises VDI solutions such as RDS or Citrix. Let your IT teams pivot to focus on adding value and improving user experience instead of keeping the service up and running.

Azure Virtual Desktop with Transparity's Azure Experts

With a team of experts dedicated to infrastructure and the Azure cloud, our specialists are knowledgeable and experienced at implementing Azure Virtual Desktop. If you are wondering whether it is right for you, why not get in touch and find out how we can help.

Or check out our Azure Virtual Desktop case studies.

Why is FinOps Essential for Your Organisation?

FinOps, as a term, is a combination of Finance and DevOps, as it brings together the business and engineering sides of an organisation for collaboration. Often thought of as Financial Operations, it is better defined as a cloud financial management discipline and culture.

In our last blog post, Introducing FinOps, we went over the six key principles of FinOps and the three phases of FinOps management: Inform, Optimise and Operate. To explore those themes and some tips straight from our Azure Expert, Anthony Cooke, take a read of that post, here.

In this post, we will look at why you should implement FinOps, along with some specific headwinds of 2023 that make it a topic worth investigating, as well as some of the Microsoft tools available for cost savings and for implementing an effective FinOps strategy in your organisation.

Why do I Need FinOps for Cloud Cost-Optimisation?

Cloud adoption and the usage of cloud technologies continue to grow across all organisation sizes and markets. One of the biggest challenges a company faces when adopting cloud technologies is switching from a traditional CapEX (Capital Expenditure) spending model to OpEx (Operating Expenditure), meaning cloud spend is tied to day-to-day operational spending instead of larger one-time purchases every few months or years.

The change in cost model introduces its own challenges for organisations, including unpredictable bills, spiralling costs, and cost inefficiencies due to waste. Due to the aforementioned challenges, it’s important that cloud spend is monitored and controlled correctly and that’s where FinOps can help. 

A Quick Recap – What is FinOps?

FinOps is a set of practices and processes aimed at optimising cloud costs and maximising the value delivered by cloud services. FinOps combines principles of financial management, cloud operations, and cloud governance to help organizations understand and manage the financial impact of their cloud usage.

The goal of FinOps is to create a culture of accountability and collaboration between IT, finance, and business teams, allowing them to make data-driven decisions about cloud usage, optimise spending, and align cloud resources with business objectives. FinOps involves monitoring and analysing cloud costs, identifying cost drivers, and implementing policies and tools to optimise cloud usage and reduce waste.

In short, FinOps is a framework that helps organizations manage their cloud costs effectively, enabling them to make informed decisions about cloud investments and usage, and ensuring that they are getting the most value from their cloud resources.

Why Should I Implement FinOps?

With pressing assignments, underway projects and a task list up to the roof, you may be wondering why you should use your valuable time to focus on FinOps.

If you are like many organisations, you have been steadily moving your workloads to the cloud, partly for the opportunity for innovation it provides and partly for the promise of cost savings over traditional on-premises infrastructure. However, though the security, scalability and avenues for innovation have certainly delivered, the promised savings may not be in sight. If you’re overspending, looking at budget caps in your rearview mirror and wondering where the unplanned-for costs are coming from, you are not alone.

This is exactly what many organisations are experiencing and with cloud usage decentralised across departments, it can be difficult to isolate where the overspend is occurring and who is responsible for resolving it.

FinOps is designed to help disparate teams speak the same language and manage costs without limiting or adding obstacles to the cloud. After all, there is no point in implementing this if it negates the innovation benefits of being in the cloud in the first place. If you want to continue with cloud uptake, usage, and innovation, then FinOps is a necessity, not a luxury.

Why FinOps is something to focus on in 2023:

Though cloud financial management is never going to be unnecessary, certain economic factors this year and a change in IT budgets and spending make now a very appropriate moment to focus on this topic.

  1. Cloud adoption is increasing: More and more organizations are moving to the cloud to take advantage of its scalability, flexibility, and cost-effectiveness. However, with increased cloud usage comes increased cloud costs, which can quickly add up if not managed properly. FinOps helps organizations keep cloud costs under control, ensuring that they are getting the most value from their cloud investments and not just swapping one inefficient cost for another.
  2. Economic uncertainty: The COVID-19 pandemic led to economic uncertainty and budget cuts for many organizations. FinOps can help organizations optimise their cloud spending, reducing costs and freeing up resources for other critical business initiatives.
  3. Cloud cost complexity: Cloud cost structures can be complex and difficult to understand, with multiple pricing models, usage tiers, and available discounts. FinOps provides a framework for analysing and optimizing cloud costs, helping organizations navigate the complexities of cloud pricing.
  4. Multi-cloud environments: Many organizations are using multiple cloud providers to take advantage of the unique features and services offered by each provider. However, managing costs across multiple cloud providers can be challenging. FinOps provides a standardised approach to managing cloud costs, regardless of the cloud provider.
  5. Increased focus on sustainability: As organizations become more aware of the environmental impact of their operations, they are looking for ways to reduce their carbon footprint. FinOps can help organizations optimise their cloud usage to minimize energy consumption and reduce their overall environmental impact.

Utilising Microsoft Azure for FinOps:

The range of discounts and tools available for cloud cost savings and optimisation is vast. Microsoft Azure alone provides a comprehensive set of tools and services to help organisations implement FinOps practices, and if you are not making use of them, that missed step alone is money going down the drain.

As a Microsoft Partner, we’d like to highlight just some of what is on offer for managing your cloud costs effectively within Azure. Here are some of the key features and capabilities of Microsoft Azure for FinOps:

  • Azure Cost Management and Billing: Azure Cost Management and Billing is a tool that provides visibility and insights into your Azure spend, allowing you to monitor and optimise your cloud costs. It provides detailed cost analysis and reporting, budgeting and alerting capabilities, and recommendations for cost optimisation.
  • Azure Advisor: Azure Advisor is a free service that provides personalised recommendations for optimising your Azure resources. It provides recommendations for cost optimization, security, performance, and availability.
  • Azure Reservation Pricing: Azure Reservation Pricing allows you to save money on your Azure usage by pre-paying for compute resources. It provides significant discounts compared to pay-as-you-go pricing.
  • Azure Hybrid Benefit: Azure Hybrid Benefit allows you to save money on your Azure usage by using your existing on-premises licenses for certain Microsoft software products. It provides considerable cost savings compared to using Azure’s pay-as-you-go licensing model.
  • Azure Spot Virtual Machines: Azure Spot Virtual Machines allow you to take advantage of unused Azure capacity at a notably reduced cost. This can provide cost savings for non-critical workloads and help to optimise your cloud spending.
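Underneath the tooling, the budgeting and alerting capability mentioned above boils down to comparing accrued spend against thresholds. Here is a toy Python sketch of that logic (illustrative only, not the Azure Cost Management API):

```python
def budget_alerts(spend: float, budget: float, thresholds=(0.5, 0.8, 1.0)):
    """Return the alert thresholds (as fractions of budget) that current spend has crossed."""
    used = spend / budget
    return [t for t in thresholds if used >= t]

# A 1,000 monthly budget with 850 already spent trips the 50% and 80% alerts.
print(budget_alerts(850, 1000))   # -> [0.5, 0.8]
print(budget_alerts(1200, 1000))  # -> [0.5, 0.8, 1.0], i.e. over budget
```

In Azure Cost Management you configure these thresholds on a budget and attach action groups (email, webhook and so on); the value of FinOps is deciding who owns each budget and what happens when an alert fires.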

Overall, Microsoft Azure provides a robust set of tools and services to help organisations implement FinOps practices and manage cloud costs effectively. By leveraging these tools and services, organisations can optimise their cloud usage, reduce costs, and get the most value from their cloud investments.

There are many aspects to FinOps – remember it is a cultural practice, so the use of these tools alone is just one angle to address. Nevertheless, it can be a great place to start for quick cost-saving wins.

Transparity’s Azure Practice and FinOps

With a team of experts dedicated to infrastructure and the Azure cloud, our specialists have designed a full offering to help any organisation, no matter at what place in the journey, understand and implement FinOps. If you are interested in cost-optimisation and better cloud financial management, why not get in touch and find out how we can help.

Introducing FinOps – Cloud Cost-Optimisation

The FinOps Foundation defines FinOps as “An evolving cloud financial management discipline and cultural practice that enables organizations to get maximum business value by helping engineering, finance, technology, and business teams to collaborate on data-driven spending decisions.”

FinOps is not a specific technology or single process; it’s a cultural practice whereby everyone involved in cloud usage takes ownership, control, and accountability for cloud spend. Engineers, infrastructure teams, product owners, executives, finance and operations all come together under the FinOps culture to improve cost efficiency, cost visibility and cost predictability, with the overall goal of cloud cost-optimisation.

Why do I Need FinOps for Cloud Cost-Optimisation?

Cloud adoption and the usage of cloud technologies continue to grow across all organisation sizes and markets. One of the biggest changes a company faces when adopting cloud technologies is switching from a traditional CapEX (Capital Expenditure) spending model to OpEx (Operating Expenditure), meaning cloud spend is tied to day-to-day operational spending instead of larger one-time purchases every few months or years.

The change in cost model introduces its own challenges for organisations, including unpredictable bills, spiralling costs, and cost inefficiencies due to waste. Due to the aforementioned challenges, it’s important that cloud spend is monitored and controlled correctly and that’s where FinOps can help. 

An Introduction to the FinOps Principles

The FinOps culture and methodology are guided by six core principles, all of which are equally important and should be practised by everyone responsible for cloud spend. Let’s take a look at each one.

1. Collaboration

Everyone within a business needs to work together and collaborate to monitor, control and improve cost optimisation. FinOps is not just a focus for the finance team.

2. Ownership

Everyone must take ownership of their cloud usage. Product teams should budget and own the cost of their specific products. Engineers and architects must think about costs when designing and building workloads. Software developers should see cloud costs as a trackable metric that’s just as important as other metrics such as performance.

3. FinOps Team

Create a specific, centralised FinOps team within your business that can focus on FinOps best practices and processes, and encourage others within the business to focus on their specific FinOps tasks. This is similar to governance or security teams, which own the processes and best practices while everyone within the business remains responsible for their specific tasks.

The FinOps team would usually focus on areas such as process creation, utilisation, discounts, negotiations, customisations and licensing.

4. Accessible and Timely Reporting

It’s imperative to have visibility of cost data that is updated in real-time and accessible by everyone at all levels of the business. This helps to drive efficiency and cost-optimisation whilst empowering team members to make decisions quickly.

Alongside cost data, the visibility of resource changes and cloud service activity logs is also very useful as this can explain why a specific cost has increased and assists with accountability.  

5. Business Value of the Cloud

FinOps and cloud solution cost-saving exercises should always be balanced against your business decisions, desired outcomes and reasons for adopting cloud technologies in the first place. It would be very easy to cut costs from any cloud bill if you blindly ignored the negative effects it would have on performance, speed, agility or user experience.

Be practical with your decisions and technical designs to ensure a workload first and foremost meets your required outcomes. Then see how you can optimise costs within those parameters.

6. Take advantage of the cost model

View the OpEx cost model as an opportunity to add value and increase agility instead of something that is a blocker or added risk.

Implementing FinOps

To implement FinOps within your business, you must think of it as a lifecycle that covers three specific phases of FinOps Management: 

1. Inform

The first phase of the FinOps lifecycle is ‘Inform’ and this is all about understanding and reporting on your current costs in order to use the information to inform the rest of the business. The more you know about your current spending and cost control, the more informed you will be going forward to make the right changes and decisions to optimise costs.

The key areas to think about include budgeting, forecasting, allocation and visibility. Microsoft Azure includes a cost management service that will assist with many of the key areas of focus including reporting. However, you must ensure you are tagging resources with key information such as owner, product or cost code. The more information you assign to a cloud service, the easier it will be to report and in turn, inform the rest of the business of the gathered information.
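
To illustrate why tagging matters for the 'Inform' phase, here is a minimal sketch of tag-based cost allocation. The records and field names are hypothetical, not the actual Azure Cost Management export schema:

```python
from collections import defaultdict

# Hypothetical cost export records; field names are illustrative only,
# not the real Azure Cost Management schema.
cost_records = [
    {"resource": "vm-web-01", "cost": 120.50, "tags": {"owner": "team-web", "cost_code": "CC100"}},
    {"resource": "sql-db-01", "cost": 310.00, "tags": {"owner": "team-data", "cost_code": "CC200"}},
    {"resource": "vm-web-02", "cost": 98.25, "tags": {"owner": "team-web", "cost_code": "CC100"}},
]

def costs_by_tag(records, tag_key):
    """Roll up spend by the value of a tag, flagging untagged resources."""
    totals = defaultdict(float)
    for record in records:
        totals[record["tags"].get(tag_key, "UNTAGGED")] += record["cost"]
    return dict(totals)

print(costs_by_tag(cost_records, "owner"))
# {'team-web': 218.75, 'team-data': 310.0}
```

Untagged spend surfaces immediately as an 'UNTAGGED' bucket, which is itself a useful signal that your tagging policy has gaps.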

2. Optimise

The ‘Optimise’ phase is where you start to use the information you’ve previously gathered and take actions that have a tangible effect on your cloud spend. A few key areas of focus include:

  • Use performance metrics and reporting to right-size resources
  • Automate shutdown and startup of resources
  • Utilise licensing benefits such as the Microsoft ‘Hybrid Use Benefit’ where applicable
  • Commit to long-term usage and reduce costs using options such as Reserved Instances
  • Continuously research new ways to save costs as cloud providers change or modify services
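
The right-sizing action above boils down to a decision rule over utilisation metrics. The thresholds below are illustrative assumptions, not Azure guidance; in practice you would derive them from your own performance baselines:

```python
def rightsize_recommendation(avg_cpu_pct, peak_cpu_pct,
                             low_threshold=20, high_threshold=80):
    """Suggest a sizing action from CPU utilisation metrics.

    Thresholds are illustrative assumptions; tune them to your own
    baselines and factor in memory, disk and network as well.
    """
    if peak_cpu_pct < low_threshold:
        return "downsize"   # even the peaks are idle: paying for unused capacity
    if avg_cpu_pct > high_threshold:
        return "upsize"     # consistently busy: risk of throttling users
    return "keep"

print(rightsize_recommendation(avg_cpu_pct=6, peak_cpu_pct=14))   # downsize
```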

3. Operate

The third stage of the lifecycle is ‘Operate’. Operationally you should be looking to continuously improve your FinOps culture and monitor against the goals and processes you have previously defined. Set up meetings between FinOps team members, measure against set KPIs and document improvements that are visible to everyone. Work collectively and improve the culture as you work through the cycles.

As with any new culture within a business, don't expect it to be instantly perfect. In fact, a "Crawl, Walk, Run" approach is advised when implementing FinOps within an organisation. Take small steps and focus on specific scopes at the beginning to test your processes and approach. As maturity grows, scale out what works for you and improve what doesn't.

Top 5 FinOps Tips for achieving Cloud Cost-Optimisation

Finally, I'll leave you with my top five FinOps tips:

Our Five FinOps Tips
  1. Tag Resources: Reporting is only useful if it provides relevant information. Tag resources with useful information such as owner, cost code or product and then use this information in cost reporting to understand allocation and areas for improvement.
  2. Set Budgets: Understand your specific budget per workload and then set up alerts to ensure you’re notified if your budget has been exceeded.
  3. Balance Costs: Balance costs against the value cloud services can provide to your business. Get the balance right between cost, performance, agility, speed and maintenance overheads. Costs are not always the most important thing for a business.
  4. Start Slow: FinOps is a culture and not something that can be learned and implemented in a matter of hours or days. Go through the FinOps phases and look to continuously improve and learn from the process.
  5. Automate to Save Costs: Understand the usage of all your cloud services and where possible automate downsizing or shutdowns to drive efficiencies with your cloud spend. If a workload isn’t being used, you shouldn’t be paying for it.
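
Tip 2, budgets and alerts, boils down to threshold checks against spend to date. A minimal sketch, assuming the common 50%/80%/100% alert thresholds that budget tooling typically offers:

```python
def budget_alerts(spend_to_date, monthly_budget, thresholds=(0.5, 0.8, 1.0)):
    """Return the alert thresholds that current spend has crossed.

    The 50%/80%/100% thresholds are an illustrative assumption.
    """
    used = spend_to_date / monthly_budget
    return [t for t in thresholds if used >= t]

print(budget_alerts(spend_to_date=850, monthly_budget=1000))  # [0.5, 0.8]
```

Azure Cost Management budgets support this kind of threshold-based alerting natively; the sketch just shows the underlying logic.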

Transparity’s Azure Practice and FinOps

With a team of experts dedicated to infrastructure and the Azure cloud, our specialists have designed a full offering to help any organisation, no matter where it is on the journey, understand and implement FinOps. If you are interested in cost optimisation and better cloud financial management, why not get in touch and find out how we can help?

The Azure Well-Architected Framework is a set of guidelines spanning five key pillars that can be used to optimise your workloads. In the previous blogs, we covered Reliability, Security, Cost Optimisation and more recently Operational Excellence. This time we will focus on Performance Efficiency, which is the fifth and final pillar of the framework. 

Overview of Performance Efficiency

Prior to the age of cloud computing, measuring and scaling performance was an extremely important factor in managing applications and workloads. To ensure sites and services could handle increases in load and traffic, it was very common to overprovision hardware to handle spikes in demand. Although this ensured business requirements could be met, it wasn't a very cost-effective approach. Since the advent of cloud computing, one of the biggest drivers for adopting cloud solutions has been the ability to scale on demand whilst keeping costs down. Performance efficiency is the ability of your workload to scale efficiently to meet the demands placed on it by users.

Although many cloud services offer some degree of Performance Efficiency out of the box, as with on-premises systems you still have to manage, test and monitor your workloads to get the best out of the solutions available.

A Well-Architected workload viewed through the lens of Performance Efficiency is a workload that is designed in a way that improves performance whilst ensuring it can scale to meet users’ demands. Design patterns and possible trade-offs against security, cost and operability also need to be considered.

Specific to Performance Efficiency, at a high level you should be thinking about the following areas and processes:

  • Review your workload using the performance efficiency checklist
  • Understand Performance Principles to assist with your strategy
  • Design for performance
  • Plan for growth and consider scalability
  • Use the correct design pattern to build a performant workload
  • Consider trade-offs such as security, cost, efficiency and operability.

Performance Efficiency Principles

When designing for Performance Efficiency in Azure, there is a set of principles covered in the Framework that you must think about. Those principles include:

  • Design for horizontal scaling by understanding business requirements, service demands, tooling and cloud service options. Horizontal scaling allows for elasticity: instances are added (scale-out) or removed (scale-in) in response to changes in load. Scaling out can improve resiliency by building redundancy, while scaling in can reduce costs by shutting down excess capacity. Apply performance strategies early in design: define a capacity model that maps to your business requirements, then test applications at the upper demand limits. Utilise Azure PaaS offerings that provide automatic scaling features and reduce management effort.
  • Test early and test often to catch issues in the design process. Stress tests and load tests are great ways to measure an application’s performance under a specific load or even maximum loads. It’s important you establish performance baselines by understanding the current efficiency of the application and its supporting infrastructure. Use continuous performance testing throughout any development effort to ensure codebase changes don’t affect performance.  
  • Continuously monitor performance in production by observing the workload as a whole to understand the overall health of the solution. A workload is only as strong as its weakest part, this is why it’s very important to monitor the health of the entire solution and not just specific parts or services. Measure infrastructure, applications and dependant services against scalability and resiliency. Ensure you re-evaluate the needs of the workloads continuously to identify improvement opportunities.
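
The horizontal-scaling principle above can be illustrated with a simple proportional rule: keep average per-instance load near a target by adjusting the instance count. The target and bounds here are assumptions for illustration:

```python
import math

def desired_instances(current, avg_load_pct, target_pct=60, min_n=2, max_n=10):
    """Propose an instance count that brings per-instance load near target_pct.

    min_n preserves redundancy for resiliency; max_n caps cost. All three
    parameters are illustrative and would come from your capacity model.
    """
    ideal = math.ceil(current * avg_load_pct / target_pct)
    return max(min_n, min(max_n, ideal))

print(desired_instances(current=4, avg_load_pct=90))  # 6 -> scale out
print(desired_instances(current=6, avg_load_pct=20))  # 2 -> scale in to the floor
```

Azure's autoscale offerings implement far more robust versions of this logic (cooldowns, multiple metrics, schedules), which is why the principle recommends PaaS autoscaling over custom scaling code.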

Performance Efficiency Recommendations & Tips

Some of the best tips or recommendations for Performance Efficiency are as follows:

  • Autoscale – Use Azure services that can scale automatically or on a schedule before looking to create custom scaling workloads and services.
  • Avoid Client Affinity – By avoiding client affinity, you ensure requests can be routed to any instance. This means the number of instances is irrelevant and scaling will be simpler.
  • Offload Intensive Tasks – Using worker roles or background jobs, you can take a resource-heavy process and offload it to a separate task. This enables the service to continue receiving requests and remain responsive.
  • Data Partitioning – Maximise performance and allow simpler scaling by splitting data across databases and servers. Understand and implement the correct data partitioning technique including horizontal, vertical and functional.
  • Use Caching – Use caching wherever possible to reduce the load on resources and services that generate or deliver data. Caching is typically suited to data that is relatively static, or that requires considerable processing to obtain.
  • Capacity Planning – Load can be impacted by world events, such as political, economic, or weather changes. Test variations of load prior to events, including unexpected ones, to ensure that your application can scale.
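
The caching tip can be made concrete with a minimal time-to-live cache. This is a sketch of the pattern, not a substitute for a managed service such as Azure Cache for Redis:

```python
import time

class TTLCache:
    """Minimal time-to-live cache: fresh entries skip the expensive source."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, stored_at)

    def get(self, key, compute):
        entry = self._store.get(key)
        now = time.monotonic()
        if entry and now - entry[1] < self.ttl:
            return entry[0]               # cache hit: no recomputation
        value = compute()                 # cache miss: do the expensive work once
        self._store[key] = (value, now)
        return value

cache = TTLCache(ttl_seconds=60)
report = cache.get("daily-report", lambda: "expensive result")
report_again = cache.get("daily-report", lambda: "expensive result")  # served from cache
```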

Conclusion

Over the last five blog posts, we have covered the Azure Well-Architected framework including its five pillars and principles and shared some useful tips along the way.

As mentioned previously, a great place to further your understanding of the framework whilst reviewing a current workload is the Well-Architected Review located here alongside Microsoft Learn documentation.

For a more in-depth Architecture Review or a specific Performance Efficiency review feel free to reach out to the Transparity Azure Cloud Services team.

Find out more about Azure

Your competition doesn’t stand still and neither does cloud. Establishing and maintaining your cloud environment needs to be approached as a continuous cycle to remain competitive by taking advantage of the latest cloud capabilities. From assessment to design and build through to modernisation, we don’t believe in taking a ‘set and forget’ approach to your cloud.

As always, Microsoft Ignite came packed with major updates across the Microsoft stack. And Azure is no exception.

With a focus on “Do more with less in the Microsoft Cloud”, Microsoft Ignite aimed to enable customers to:

  • Empower everyone for a new world of hybrid work,
  • Build a hyperconnected business,
  • Innovate anywhere from multicloud to edge, and
  • Protect everything with end-to-end security.

As premium event sponsors, we had a front-row seat to the exciting updates Microsoft announced. Here, we’ll share our key takeaways from the event and the updates we think you need to know about in Azure.

Of course, some of these updates are quite technical – if you have any questions or want to discuss how these updates affect your cloud environment don’t hesitate to get in touch.

General Availability: Azure Savings Plan for Compute

With the Azure savings plan for compute, customers commit to spending a fixed hourly amount on compute services for one or three years, paid upfront or monthly.

As you use select compute services across the world, your usage is covered by the plan at reduced prices, helping you get more value from your cloud budget. If you go over your committed usage, you’ll be billed at the regular pay-as-you-go prices. Savings automatically apply across compute usage globally.

Customers may see estimated savings of between 11% and 65%.
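
To make the billing mechanics concrete, here is a simplified, hypothetical cost model. The single 30% discount rate is an assumption for illustration only; actual savings-plan rates vary by service and region, hence the 11%-65% range:

```python
def hourly_bill(payg_usage, commitment, discount=0.30):
    """Sketch of savings-plan billing for one hour.

    Assumption: one flat discount rate (real plans apply per-service rates).
    Usage is discounted until the commitment is consumed; anything beyond
    that bills at regular pay-as-you-go prices.
    """
    discounted = payg_usage * (1 - discount)
    if discounted <= commitment:
        return commitment  # the committed amount is paid even if under-used
    # Usage worth `commitment` at discounted rates is covered;
    # the remainder is billed at pay-as-you-go prices.
    covered_payg_value = commitment / (1 - discount)
    return commitment + (payg_usage - covered_payg_value)

print(hourly_bill(payg_usage=10, commitment=7))  # 7: commitment fully covers usage
```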

Public Preview: Azure Container Apps Azure Monitor Integration!

By default, all logs are sent to Log Analytics. This new integration lets you send logs to Azure Monitor and choose where they go.

You can now leverage Azure Storage, Event Hubs and other partner solutions. This provides a single pane of glass by allowing you to use Azure Monitor to monitor Container Apps.

Generally Available: Azure Automanage for VMs and Arc-Enabled Servers.

New capabilities have been added to Azure Automanage, allowing you to save time, reduce risk and improve workload uptime by automating day-to-day configuration and management tasks:

    • Apply enhanced backup settings and auditing modes for server baselines
    • Specify custom Log Analytics workspaces and Azure tags to identify resources based on settings relevant to your organisation
    • Support for Windows 10 VMs

Public Preview: IP Protection SKU for Azure DDoS Protection

A new SKU enables DDoS protection on individual public IPs. IP Protection will contain the same features as Network Protection.

Network protection will gain the following services:

  • DDoS Rapid Response support
  • Cost protection
  • Integration with Azure Firewall Manager
  • Discounts on Azure Web Application Firewall

Billing is effective as of 1 Feb 2023.

Generally Available: Windows Server 2022 host support in AKS

Windows Server 2022 is now supported on AKS bringing security improvements, available for Kubernetes v1.23 and higher.

General availability: Confidential VM option for SQL Server on Azure Virtual Machines

This new option for SQL Server on Azure Virtual Machines ensures that data in use, as well as data at rest stored on your VM's drives, is inaccessible to unauthorised users outside the VM, without changing the code of your SQL applications.

Public Preview: Azure Resource Topology

ART is replacing the Network Watcher topology. ART will allow users to draw a unified topology across multiple subscriptions, regions and resource groups.

ART will allow deep dives into an environment's layout. It also allows for monitoring and diagnostics, with the capability of running 'Next Hop' directly from a resource within the ART view after specifying a destination. Plus:

  • Selecting a resource will highlight all nodes and resources connected to it
  • Side-by-side comparisons of regions, VNets and subnets

General Availability: Azure Monitor predictive autoscale for Azure VMSS

Azure Monitor can forecast CPU load for your VMSS based on historical CPU usage patterns, with scale-out occurring in time to meet demand. You can configure how far in advance new instances are provisioned, and view the predicted CPU forecast without triggering scaling actions using forecast-only mode.

Generally available: Windows Admin Centre for Azure Virtual Machines

Access WAC within Azure to perform maintenance and troubleshooting tasks such as managing your files, viewing your events, monitoring your performance and getting an in-browser RDP or PowerShell session.

  • Perform more actions within Azure
  • Less need to RDP to VMs for admin tasks, simplifying the experience
  • Features SSO using Azure AD, regardless of whether the VM is joined on-premises, joined to Azure AD or not joined at all
  • Should reduce reliance on local admin accounts when managing servers in Azure
  • Available on Windows Server 2016 or above
  • Find the Windows Admin Centre blade under Settings in the Virtual Machine Azure portal UI


The Azure Well-Architected Framework is a set of guidelines spanning five key pillars that can be used to optimise your workloads. In the previous blogs we covered Reliability, Security and Cost Optimisation alongside relevant services, processes and assessments. This time we’ll focus on the Operational Excellence pillar of the framework. 

Overview of Operational Excellence

The services and technologies you use in the cloud differ hugely compared to those on-premises. But what doesn't differ is the requirement that all deployments and environments are reliable and predictable. Operational Excellence is the fourth pillar of the Well-Architected framework and covers the operational processes you require to ensure applications continue to operate.

The key processes that fall within operational excellence are Workload Automation, Workload Release, Monitoring and Testing.  The end goal is to achieve superior operational practices.

Similar to the previous Security and Cost Optimisation pillars, Operational Excellence must be thought about throughout the lifecycle of a workload, including the design and architecture phases, but especially once the workload is running. The management of a service and the related processes should not be retrofitted to environments or services; thinking about these areas early on will reduce management overhead in the long term.

A Well-Architected workload viewed through the lens of Operational Excellence is a workload that is released in an automated manner, and monitored and tested efficiently, to ensure the application provides value not just to your customers, but to your internal development and operations teams.

Specific to Operational Excellence, at a high-level you should be thinking about the following areas and processes:

  • Design, build and orchestrate workloads with DevOps principles in mind
  • Monitor workloads efficiently using Azure Monitor
  • Understand Application Performance Management
  • Automate as many processes as possible
  • Create and automate repeatable infrastructure
  • Prepare for the unexpected by testing workloads

Operational Excellence Principles

When designing for Operational Excellence in Azure, there is a set of principles covered in the Framework that you must think about. Those principles include:

  • Optimise build and release processes by embracing software engineering disciplines. Infrastructure should be deployed via code (IaC), with continuous integration and delivery (CI/CD) pipelines used for build and release. Automate testing plans and avoid configuration drift using configuration as code. Azure DevOps and Azure Policy are two tools which can assist greatly in optimising builds, releases and configuration drift.
  • Understand operational health by using tools and processes that monitor all aspects of a workload including but not limited to build and release processes, infrastructure health and application health. Allow your teams to be proactive instead of reactive by observing workloads and correlating events to truly understand the workload health and performance.
  • Rehearse recovery and practice failure by running disaster recovery (DR) drills at regular intervals to validate and understand the effectiveness of your recovery processes, and the responsibilities of internal teams. Use chaos engineering practices to identify weak points in applications via services such as Azure Chaos Studio.
  • Embrace continuous operational improvement to reduce complexity and ambiguity where possible via continuously evaluating and refining operational processes and tasks. It’s important processes are always being evolved over time and that inefficiencies are optimised. Most importantly, always learn from your failures.
  • Use loosely coupled architectures such as microservices and serverless technologies that allow teams to build and deploy services independently to minimise service failures or impact on a large scale. It’s also important to think about cloud design patterns such as circuit breakers, load-levelling and throttling.
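
The circuit-breaker pattern mentioned in the last principle can be sketched in a few lines. This is a minimal illustration of the idea; real implementations add half-open probing policies, metrics and thread safety:

```python
import time

class CircuitBreaker:
    """Fail fast after repeated failures instead of hammering a sick dependency."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after  # seconds before a trial call is allowed
        self.failures = 0
        self.opened_at = None

    def call(self, fn):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call through
            self.failures = 0
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # open the circuit
            raise
        self.failures = 0  # success resets the failure count
        return result
```

While the circuit is open, callers get an immediate error rather than a slow timeout, which is exactly the load-shedding behaviour that stops one failing service from dragging down its consumers.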

Operational Excellence Recommendations & Tips

Some of the best tips or recommendations for operational excellence are as follows:

Azure Policy

Azure Policy is a free Azure service that allows you to enforce resource-level rules across your Azure estate, assisting in the adoption of operational best practices. Azure Policy is also a great tool for configuration drift management and monitoring. For example, Azure Policy can ensure all workloads adhere to a specific set of security rules, such as HTTPS usage or minimum TLS versions.
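
Conceptually, an audit rule like 'HTTPS only' is just a predicate over resource properties. Here is a hypothetical sketch of that evaluation logic; Azure Policy itself expresses rules as JSON policy definitions, not Python:

```python
# Hypothetical resource inventory; `https_only` mirrors the kind of
# property an HTTPS-enforcement policy would audit.
resources = [
    {"name": "app-web", "https_only": True},
    {"name": "app-api", "https_only": False},
    {"name": "app-admin", "https_only": True},
]

def non_compliant(resources, property_name="https_only"):
    """List resources that fail a simple boolean audit rule."""
    return [r["name"] for r in resources if not r.get(property_name)]

print(non_compliant(resources))  # ['app-api']
```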

Azure Advisor

Azure Advisor is a fantastic resource that provides a set of Azure Policy recommendations that, in turn, can be used to identify opportunities to implement best practices across your workloads.

DevOps Checklist

Use the DevOps checklist to review your design and management from a DevOps standpoint. The checklist covers culture, development, testing, release, monitoring and management, and can be found here.

Strangler

Strangler Fig is a cloud design pattern for incrementally migrating a legacy system by gradually replacing specific pieces of functionality with new apps or services. Eventually, the older system is 'strangled' and the new system takes over.
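
The pattern is easiest to see as a routing facade placed in front of both systems. A minimal sketch, with hypothetical paths and service names:

```python
# Hypothetical set of routes already migrated to the new system.
# This set grows release by release until the legacy system is empty.
MIGRATED_PATHS = {"/orders", "/invoices"}

def route(path):
    """Strangler-fig facade: migrated paths go to the new service,
    everything else still hits the legacy system."""
    return "new-service" if path in MIGRATED_PATHS else "legacy-system"

print(route("/orders"))   # new-service
print(route("/reports"))  # legacy-system
```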

Team structure

Take time to understand and plan your operating model and internal teams. For example, managing loosely coupled architecture requires procedural decoupling as teams shouldn’t have to depend on partner teams to support, approve or operate their workloads.

Review your workloads

We will continue to cover the remaining pillars throughout this series of blogs. As highlighted in previous posts, you can review your current posture against the five Well-Architected pillars. The tool is free and can be accessed here.

For a more in-depth Architecture Review or a specific Operational Excellence Review feel free to reach out to our Azure Cloud Experts.
